Version 4, Tue, 08 Sep 2020 16:58:36 +0200
The present work shows a technique for removing jump discontinuities from nonlinear programs. Functions that contain jump discontinuities are first expressed with the help of the step() function. The resulting step() function values are then replaced by linear and, if such a function contains more than one jump discontinuity, also quadratic expressions constructed from sign() functions. The sign() function values are in turn provided via additional variables defined by additional equalities that contain abs() functions, which are finally made available through linear equalities with nonnegative auxiliary variables that are kept consistent by means of special penalties. At least the formulation of the latter part is also known under the name Mathematical Programs with Equilibrium Constraints (MPEC). The presented technique works as long as the prerequisites for replacing the abs() functions are given. The elimination of one jump discontinuity requires three additional variables, two additional equality constraints, one of which is linear and the other quadratic, six additional linear inequality constraints, and one additional quadratic penalty. The technique has successfully been applied to solve TP 87 in [7].
It still has to be checked whether the material called Mathematical Programs with Equilibrium Constraints in [2] is the same as what carries that name in [4], or whether the material in [2] at least belongs to a subtopic that [4] deals with. Studying [4] should also give a deeper insight into whether the approach presented here always works or only under certain conditions that are not obvious upon ordinary consideration. In short, I have to read the book someday.
For the representation, this page uses MathJax, a set of JavaScript scripts that processes the mathematics. MathJax is configured here such that all of us see the formulæ typeset with the same fonts. Good. On the other hand, it seems that currently only some of us are able to print the present document in an acceptable manner, except with the X Window System window dumping utility, xwd, of course. Those who run a browser that says about itself Mozilla/5.0 (X11; Linux; rv:31.1.1) Gecko/20140101 Firefox/31.1.1 Iceweasel/31.1.1 belong to the lucky ones. That browser generates a perfect PDF file from the text here.
For the change log click Changelog.
Comments, bug reports, and better ideas are welcome.
I am grateful to John D. Hedengren, Provo UT, for providing MPEC related model files through the Internet, which were taken as the starting point for the present work.
[1] A. Griewank: On stable piecewise linearization and generalized algorithmic differentiation. Optimization Methods & Software, http://dx.doi.org/10.1080/10556788.2013.796683 (2013)
[2] J. D. Hedengren: APMonitor Documentation: MPEC Examples. http://www.apmonitor.com/wiki/index.php/Apps/MpecExamples/
[3] W. Hock and K. Schittkowski: Test Examples for Nonlinear Programming Codes. Vol. 187 of Lecture Notes in Economics and Mathematical Systems (Springer, 1981)
[4] Z.-Q. Luo, J.-S. Pang and D. Ralph: Mathematical Programs with Equilibrium Constraints. (Cambridge University Press, 1996) ISBN 0-521-57290-8
[5] K. Schittkowski: Test Examples for Nonlinear Programming Codes: All Problems from the Hock-Schittkowski-Collection. http://www.ai7.uni-bayreuth.de/test_problem_coll.pdf (2009), taken away from the Internet
[6] K. Schittkowski: An Updated Set of 306 Test Problems for Nonlinear Programming with Validated Optimal Solutions, User's Guide. http://www.ai7.uni-bayreuth.de/test_problems.pdf and http://www.ai7.uni-bayreuth.de/test_probs_src.zip, file PROB.FOR (2011), both taken away from the Internet
[7] S. K. H. Seidl: Revised Hock & Schittkowski Models for Automatable Test Scenarios. https://www.stfmc.de/fmc/rhs/x/tlf.html (2014)
When real-world processes and systems are considered as nonlinear programs that have to be solved, the appearance of discontinuities and poles in the objective function and/or the constraints is quite a normal thing. While poles can commonly be transformed into less aggressive discontinuities via variable substitutions, jump discontinuities typically cannot. Jump discontinuities come either from the considered object itself, which can, for example, be thought of as a transmission system with a gearbox involved, or from the fact that the applied theoretical foundations do not accurately reflect the behavior of a given complex matter. So, nature does not know what a Heaviside step function is, whereas scientists and engineers do.
An obvious way to treat jump discontinuities is to replace them by transition domains of finite extension. Test problem 87-1 in [7] shows an appropriate case. With the small transition domains there, the objective function becomes differentiable up to and including the second derivative. The problem dimensions are conserved. Good solvers will solve such problems, but the result may or may not be satisfying: if a variable lies, on exit, inside a transition domain, then the solution that even the best solver yields will be sensible but not exact. Hence, one has to look for other methods to tackle jump discontinuities exhaustively.
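As an illustration only, the following Python sketch replaces a generic jump between two branches at a changeover point r by a linear transition of half-width delta; the function name and the value of delta are hypothetical and are not taken from [7].

def smoothed_switch(x, r, g_lo, g_hi, delta):
    """Blend g_lo(x) into g_hi(x) across a transition domain [r - delta, r + delta].

    Outside the transition domain the original branches are returned unchanged;
    inside it a linear blend removes the jump at x = r.
    """
    if x <= r - delta:
        return g_lo(x)
    if x >= r + delta:
        return g_hi(x)
    t = (x - (r - delta)) / (2.0 * delta)   # t runs linearly from 0 to 1
    return (1.0 - t) * g_lo(x) + t * g_hi(x)

# Example: a jump from 30*x to 31*x at r = 300, softened with delta = 0.01
f1_soft = lambda x: smoothed_switch(x, 300.0, lambda v: 30.0 * v, lambda v: 31.0 * v, 0.01)

Test problem 87-1 uses transitions that are smooth enough to keep the objective twice differentiable; the linear blend above merely illustrates the idea of a finite transition domain.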
Griewank [1] sketches how functions involving the absolute value function $\seidlMathFunctionAbs()$ can be approximated locally by piecewise-linear approaches. Hedengren [2] shows a way to represent the functions $\seidlMathFunctionAbs()$, $\max()$, $\min()$ and $\seidlMathFunctionSign()$ by means of additional variables, constraints and penalties. The technique applied there is referred to as Mathematical Programs with Equilibrium Constraints (MPEC), which directly leads one to [4].
With [1] and [2], the author believes that the most basic element to construct the other annotated functions is $\seidlMathFunctionAbs()$. So it is on principle not too difficult to build $\seidlMathFunctionAbs()$ with the help of two non-negative variables, an equality, and a penalty, i.e., to build $\seidlMathFunctionAbs()$ in accordance with the MPEC idea. After having made $\seidlMathFunctionAbs()$ available, it immediately follows for the maximum and the minimum of two real numbers $\max(x_{\seidlMathIndex{1}},x_{\seidlMathIndex{2}})=% (x_{\seidlMathIndex{1}}+x_{\seidlMathIndex{2}}+% \seidlMathFunctionAbs(x_{\seidlMathIndex{1}}-x_{\seidlMathIndex{2}}))/2$ and $\min(x_{\seidlMathIndex{1}},x_{\seidlMathIndex{2}})=% (x_{\seidlMathIndex{1}}+x_{\seidlMathIndex{2}}-% \seidlMathFunctionAbs(x_{\seidlMathIndex{1}}-x_{\seidlMathIndex{2}}))/2$. With respect to $\seidlMathFunctionSign()$ the things are not as easy. $\seidlMathFunctionSign()$ maps a real argument onto two isolated points. So it naturally represents the low-level element to form binary case distinctions. What MPEC finally does in the context of $\seidlMathFunctionSign()$ is introducing additional degrees of freedom and using them to enable channels that allow signum-type variables to continuously change their values between $\pm1$. That way the two initially isolated points seem to belong to a somehow connected domain. The art with MPEC here is to ensure that those signum-type variables can, on one hand, tunnel through the artificially created channels but do, on the other hand, not linger inside them, i.e., to ensure that signum-type variables actually adopt the values $\pm1$ at end. Once having the $\seidlMathFunctionSign()$ function available, the $\seidlMathFunctionStep()$ function (\ref{stepfunction}) can straightforwardly be introduced to finally represent expressions with jump discontinuities at a comfortably high level.
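As a quick numerical illustration of the two identities, the following minimal Python sketch simply relies on the built-in absolute value; it is not an MPEC formulation.

# Minimal check of max/min expressed through abs(); plain Python, no MPEC involved.
def max_via_abs(x1, x2):
    return (x1 + x2 + abs(x1 - x2)) / 2.0

def min_via_abs(x1, x2):
    return (x1 + x2 - abs(x1 - x2)) / 2.0

for x1, x2 in [(3.0, -7.5), (-2.0, -2.0), (0.0, 4.25)]:
    assert max_via_abs(x1, x2) == max(x1, x2)
    assert min_via_abs(x1, x2) == min(x1, x2)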
It will furthermore be seen that, if the procedure to construct $\seidlMathFunctionAbs()$ is assumed to work reliably, then any jump discontinuity can reliably be removed by introducing additional variables, constraints and penalties. In particular, it will be shown that the elimination of one jump discontinuity requires three additional variables, two additional equality constraints, one of which is linear and the other quadratic, six additional linear inequality constraints, i.e. two bounds for each introduced variable, and one additional quadratic penalty.
The procedure to remove jump discontinuities with respect to single variables is illustrated in the following by means of example 87-2 in [7].
For the sake of convenience, TP 87 initially defines five parameters. Unfortunately, the value that is given for $b$ in [3] and [5], 1.48577, is wrong. It has to be replaced by 1.48477, as defined in PROB.FOR in [6]. With 1.48477, the results presented in [3] and [5] can be reproduced, whereas the latter is not possible with 1.48577; see [7]. $$\begin{align} \label{hs87:parameters:a}a\;&=\;131.078\\ \label{hs87:parameters:b}b\;&=\;1.48477\\ \label{hs87:parameters:c}c\;&=\;.90798\\ \label{hs87:parameters:d}d\;&=\;\cos(1.47588)\\ \label{hs87:parameters:e}e\;&=\;\sin(1.47588)\\ \end{align}$$ TP 87 further requires that the six variables $x_{\seidlMathIndex{1}}\ldots\;\!x_{\seidlMathIndex{6}}$ fulfill the following four equalities. $$\begin{align} \label{hs87:equalities:1}300-x_{\seidlMathIndex{1}}-x_{\seidlMathIndex{3}}\;\!x_{\seidlMathIndex{4}}% \cos(b-x_{\seidlMathIndex{6}})/a+c\;\!d\;\!x_{\seidlMathIndex{3}}^2/a\;&=\;0\\ \label{hs87:equalities:2}(-1)x_{\seidlMathIndex{2}}-x_{\seidlMathIndex{3}}\;\!x_{\seidlMathIndex{4}}% \cos(b+x_{\seidlMathIndex{6}})/a+c\;\!d\;\!x_{\seidlMathIndex{4}}^2/a\;&=\;0\\ \label{hs87:equalities:3}(-1)x_{\seidlMathIndex{5}}-x_{\seidlMathIndex{3}}\;\!x_{\seidlMathIndex{4}}% \sin(b+x_{\seidlMathIndex{6}})/a+c\;\!e\;\!x_{\seidlMathIndex{4}}^2/a\;&=\;0\\ \label{hs87:equalities:4}200-x_{\seidlMathIndex{3}}\;\!x_{\seidlMathIndex{4}}% \sin(b-x_{\seidlMathIndex{6}})/a+c\;\!e\;\!x_{\seidlMathIndex{3}}^2/a\;&=\;0 \end{align}$$ We introduce six more parameters to facilitate the discussion below. $$\begin{align} \label{hs87:boundparameters:1}\ell_{\seidlMathIndex{1}}\;&=\;0&u_{\seidlMathIndex{1}}\;&=\;400\\ \label{hs87:boundparameters:2}\ell_{\seidlMathIndex{2}}\;&=\;0&u_{\seidlMathIndex{2}}\;&=\;1000\\ \label{hs87:boundparameters:5}\ell_{\seidlMathIndex{5}}\;&=\;-18&u_{\seidlMathIndex{5}}\;&=\;-10.7 \end{align}$$ With (\ref{hs87:boundparameters:1}), (\ref{hs87:boundparameters:2}) and (\ref{hs87:boundparameters:5}), the box-type bounds for the six variables can be written as $$\begin{align} \label{hs87:bounds:1}\ell_{\seidlMathIndex{1}}\;\le\;x_{\seidlMathIndex{1}}\;&\le\;u_{\seidlMathIndex{1}}\\ \label{hs87:bounds:2}\ell_{\seidlMathIndex{2}}\;\le\;x_{\seidlMathIndex{2}}\;&\le\;u_{\seidlMathIndex{2}}\\ \label{hs87:bounds:3}340\;\le\;x_{\seidlMathIndex{3}}\;&\le\;420\\ \label{hs87:bounds:4}340\;\le\;x_{\seidlMathIndex{4}}\;&\le\;420\\ \label{hs87:bounds:5}\ell_{\seidlMathIndex{5}}\;\le\;x_{\seidlMathIndex{5}}\;&\le\;u_{\seidlMathIndex{5}}\\ \label{hs87:bounds:6}0\;\le\;x_{\seidlMathIndex{6}}\;&\le\;.5236\;\;\;. \end{align}$$ The parameters $\ell_{\seidlMathIndex{5}}$ and $u_{\seidlMathIndex{5}}$ in (\ref{hs87:boundparameters:5}) and (\ref{hs87:bounds:5}) require some further explanation. The values that are given for $\ell_{\seidlMathIndex{5}}$ and $u_{\seidlMathIndex{5}}$ in [3] and [5] are (-1000) and 10000. PROB.FOR in [6] defines 1000 for $u_{\seidlMathIndex{5}}$. Further, the solution to TP 87 as provided through [3], [5] and [6] is suboptimal, i.e. one can find a better one, as in [7] for example. So, to prevent any solver from locking into the secondary minimum described by [3], [5] and [6], or into another one, the $x_{\seidlMathIndex{5}}$ margin has been reduced to the values seen in (\ref{hs87:boundparameters:5}). That way the controversy whether 10000 or 1000 has to be taken for $u_{\seidlMathIndex{5}}$ becomes moot, and, together with the changes below, at least APMonitor, FMC, LANCELOT, LOQO, MINOS, and SNOPT prefer to converge to the best known solution.
The intervention itself is quite innocuous because $x_{\seidlMathIndex{5}}$ occurs only once in the whole model description and the critical variables, here $x_{\seidlMathIndex{1}}$ and $x_{\seidlMathIndex{2}}$, are left untouched.
Before we now come to the objective function a set of changeover parameters is defined. $$\begin{align} \label{hs87:changeoverparameters:r11}r_{\seidlMathIndex{11}}\;&=\;300\\ \label{hs87:changeoverparameters:r21}r_{\seidlMathIndex{21}}\;&=\;100\\ \label{hs87:changeoverparameters:r22}r_{\seidlMathIndex{22}}\;&=\;200 \end{align}$$ With (\ref{hs87:changeoverparameters:r11}), (\ref{hs87:changeoverparameters:r21}) and (\ref{hs87:changeoverparameters:r22}), the objective function of TP 87 appears as follows. $$\begin{align} \label{hs87:objectivefunction}f(\seidlMathVector{x})\;&=\;% f_{\seidlMathIndex{1}}(\seidlMathVector{x})+f_{\seidlMathIndex{2}}(\seidlMathVector{x})\\ \label{hs87:objectivefunction:1}f_{\seidlMathIndex{1}}(\seidlMathVector{x})\;&=% \;\!\left\{\begin{array}{@{}r@{}r@{}l@{}}% \;30\;\!x_{\seidlMathIndex{1}}\,,&% 0\;\le\;x_{\seidlMathIndex{1}}\;\lt&\!\!\!r_{\seidlMathIndex{11}}\\% \;31\;\!x_{\seidlMathIndex{1}}\,,&% r_{\seidlMathIndex{11}}\;\le\;x_{\seidlMathIndex{1}}\;\le&\!\!\!400% \end{array}\right.\\ \label{hs87:objectivefunction:2}f_{\seidlMathIndex{2}}(\seidlMathVector{x})\;&=% \left\{\begin{array}{@{}r@{}r@{}l@{}}% \;28\;\!x_{\seidlMathIndex{2}}\,,&% 0\;\le\;x_{\seidlMathIndex{2}}\;\lt&\!\!\!r_{\seidlMathIndex{21}}\\% \;29\;\!x_{\seidlMathIndex{2}}\,,&% r_{\seidlMathIndex{21}}\;\le\;x_{\seidlMathIndex{2}}\;\lt&\!\!\!r_{\seidlMathIndex{22}}\\% \;30\;\!x_{\seidlMathIndex{2}}\,,&% r_{\seidlMathIndex{22}}\;\le\;x_{\seidlMathIndex{2}}\;\le&\!\!\!1000% \end{array}\right. \end{align}$$ For the starting point, the value $$\label{hs87:startingpoint}{\seidlMathVector{x}}_{\seidlMathIndex{0}}=% (390,1000,419.5,340.5,-10.7,0.5)^{\;\!\!\mathsf{T}}$$ is chosen instead of ${\seidlMathVector{x}}_{\seidlMathIndex{0}}=(\;\ldots\;\!,198.175,0.5)^{\;\!\!\mathsf{T}}$ from [3], [5] and [6], to take the modified upper bound for $x_{\seidlMathIndex{5}}$ into account. The starting point of the original TP 87 is not feasible and the starting point belonging to the appropriate revised nonlinear program, i.e. the one here and in [7], behaves accordingly.
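For orientation, the original, discontinuous objective (\ref{hs87:objectivefunction}) through (\ref{hs87:objectivefunction:2}) can be written down directly, here as a Python sketch for evaluation purposes only.

# Sketch: the original TP 87 objective with its jump discontinuities,
# written down directly for evaluation purposes only.
r11, r21, r22 = 300.0, 100.0, 200.0   # changeover parameters

def f1(x1):
    return 30.0 * x1 if x1 < r11 else 31.0 * x1

def f2(x2):
    if x2 < r21:
        return 28.0 * x2
    if x2 < r22:
        return 29.0 * x2
    return 30.0 * x2

def f(x):
    return f1(x[0]) + f2(x[1])

# Evaluation at the starting point chosen above
x0 = (390.0, 1000.0, 419.5, 340.5, -10.7, 0.5)
print(f(x0))   # 31*390 + 30*1000 = 42090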
From the point of view of a modeler, (\ref{hs87:parameters:a}) through (\ref{hs87:startingpoint}) represent a problem that is neither strange nor invalid and that, on top of everything, most likely encloses the global minimum. The best known solution actually lies on or at one of the jump discontinuities. From the point of view of a solver developer, on the other hand, the situation does not look as good. Superlinearly converging algorithms suffer severe damage if the course is not a smooth chute but appears as a sequence of stair treads. Recall how the Hessian is frequently built up. The latter alone should yield enough motivation to remove jump discontinuities. So we will begin here to do so.
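A small numerical illustration of the stair-tread effect, as a Python sketch with an arbitrarily chosen step size $h$: a central difference quotient of $f_{\seidlMathIndex{1}}$ straddling the changeover at $r_{\seidlMathIndex{11}}$ is dominated by the jump itself rather than by the slopes 30 and 31, and the corresponding second difference blows up as $h$ shrinks.

# Sketch: finite-difference derivative estimates of f1 across the jump at r11 = 300.
def f1(x1):
    return 30.0 * x1 if x1 < 300.0 else 31.0 * x1

h = 1.0e-4
x = 300.0                                            # evaluate right at the changeover
d1 = (f1(x + h) - f1(x - h)) / (2.0 * h)             # central first difference
d2 = (f1(x + h) - 2.0 * f1(x) + f1(x - h)) / h**2    # central second difference

print(d1)   # ~ 150/h + 30.5  (about 1.5e6 for h = 1e-4), not a slope between 30 and 31
print(d2)   # ~ -300/h**2     (about -3e10 for h = 1e-4), instead of 0

A finite-difference or quasi-Newton Hessian built from such quotients is useless in the vicinity of the jump.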
Firstly, the objective function (\ref{hs87:objectivefunction}) is replaced by a one that is defined for all real variable values $-\infty\le{x_{\seidlMathIndex{1}}},{x_{\seidlMathIndex{2}}}\lt\infty$. $$\begin{align} \label{hs87:modifiedobjectivefunction}f_{\seidlMathIndex{m}}(\seidlMathVector{x})\;&=\;% f_{\seidlMathIndex{m1}}(\seidlMathVector{x})+f_{\seidlMathIndex{m2}}(\seidlMathVector{x})\\ \label{hs87:modifiedobjectivefunction:1}f_{\seidlMathIndex{m1}}(\seidlMathVector{x})\;&=% \;\!\left\{\begin{array}{@{}r@{}r@{}l@{}}% \;30\;\!x_{\seidlMathIndex{1}}\,,&% -\infty\;\le\;x_{\seidlMathIndex{1}}\;\lt&\!\!\!r_{\seidlMathIndex{11}}\\% \;31\;\!x_{\seidlMathIndex{1}}\,,&% r_{\seidlMathIndex{11}}\;\le\;x_{\seidlMathIndex{1}}\;\lt&\!\!\!\infty% \end{array}\right.\\ \label{hs87:modifiedobjectivefunction:2}f_{\seidlMathIndex{m2}}(\seidlMathVector{x})\;&=% \left\{\begin{array}{@{}r@{}r@{}l@{}}% \;28\;\!x_{\seidlMathIndex{2}}\,,&% -\infty\;\le\;x_{\seidlMathIndex{2}}\;\lt&\!\!\!r_{\seidlMathIndex{21}}\\% \;29\;\!x_{\seidlMathIndex{2}}\,,&% r_{\seidlMathIndex{21}}\;\le\;x_{\seidlMathIndex{2}}\;\lt&\!\!\!r_{\seidlMathIndex{22}}\\% \;30\;\!x_{\seidlMathIndex{2}}\,,&% r_{\seidlMathIndex{22}}\;\le\;x_{\seidlMathIndex{2}}\;\lt&\!\!\!\infty% \end{array}\right. \end{align}$$ Directly substituting (\ref{hs87:modifiedobjectivefunction}) for (\ref{hs87:objectivefunction}) is possible because (\ref{hs87:boundparameters:1}) and (\ref{hs87:boundparameters:2}) ensure that the program remains the same. It is furthermore seen that, in (\ref{hs87:modifiedobjectivefunction:1}) and (\ref{hs87:modifiedobjectivefunction:2}), the relation $$\label{hs87:strongmonotonicchangeoverparameters}% r_{\seidlMathIndex{i}\;\!\seidlMathIndex{j}}\lt\,\!r_{\seidlMathIndex{i}\,\,\seidlMathIndex{j+1}}$$ always holds. Hence, with (\ref{hs87:strongmonotonicchangeoverparameters}), the awful formulations (\ref{hs87:modifiedobjectivefunction:1}) and (\ref{hs87:modifiedobjectivefunction:2}) can elegantly be rewritten as $$\begin{align} \label{hs87:modifiedobjectivefunctionwithstep:1}f_{\seidlMathIndex{m1}}(\seidlMathVector{x})\;&=\;% 30\,x_{\seidlMathIndex{1}}\;\!\seidlMathFunctionStep(% -\infty,x_{\seidlMathIndex{1}},r_{\seidlMathIndex{11}})+\;\!% 31\,x_{\seidlMathIndex{1}}\;\!\seidlMathFunctionStep(% r_{\seidlMathIndex{11}},x_{\seidlMathIndex{1}},\infty)\\ \label{hs87:modifiedobjectivefunctionwithstep:2}f_{\seidlMathIndex{m2}}(\seidlMathVector{x})\;&=\;% 28\,x_{\seidlMathIndex{2}}\;\!\seidlMathFunctionStep(% -\infty,x_{\seidlMathIndex{2}},r_{\seidlMathIndex{21}})+\;\!% 29\,x_{\seidlMathIndex{2}}\;\!\seidlMathFunctionStep(% r_{\seidlMathIndex{21}},x_{\seidlMathIndex{2}},r_{\seidlMathIndex{22}})+\;\!% 30\,x_{\seidlMathIndex{2}}\;\!\seidlMathFunctionStep(% r_{\seidlMathIndex{22}},x_{\seidlMathIndex{2}},\infty)\;\;\;, \end{align}$$ where use has been made of $$\label{stepfunction}% \seidlMathFunctionStep(e_{\seidlMathIndex{1}},e_{\seidlMathIndex{2}},e_{\seidlMathIndex{3}})\,=% \left\{\begin{array}{@{}r@{}l@{}}% 1\,,&e_{\seidlMathIndex{1}}\le\;\!e_{\seidlMathIndex{2}}\lt\;\!e_{\seidlMathIndex{3}}\;\vee\;% e_{\seidlMathIndex{1}}=\;\!e_{\seidlMathIndex{2}}\;\!=\;\!e_{\seidlMathIndex{3}}\\ 0\,,&\mbox{otherwise}\;\;\;. \end{array}\right.$$ (\ref{stepfunction}) is the definition of the $\seidlMathFunctionStep()$ function which was first seen by the author in IBM's PL/I-FORMAC. 
(\ref{hs87:strongmonotonicchangeoverparameters}) means $e_{\seidlMathIndex{1}}{\lt\;\!}e_{\seidlMathIndex{3}}\;\!$ in (\ref{stepfunction}) such that $e_{\seidlMathIndex{1}}{=}\;\!e_{\seidlMathIndex{2}}\;\!{=}\;\!e_{\seidlMathIndex{3}}$ never applies there.
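For reference, (\ref{stepfunction}) translates one-to-one into code. The following minimal Python sketch merely pins down the semantics used here; the name step3 is a hypothetical choice.

def step3(e1, e2, e3):
    """step(e1, e2, e3) as defined above:
    1 if e1 <= e2 < e3, or if e1 == e2 == e3; otherwise 0."""
    if (e1 <= e2 < e3) or (e1 == e2 == e3):
        return 1
    return 0

# Examples: the half-open intervals do not overlap at the changeover point.
assert step3(float("-inf"), 299.9, 300.0) == 1
assert step3(float("-inf"), 300.0, 300.0) == 0
assert step3(300.0, 300.0, float("inf")) == 1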
We introduce the $\seidlMathFunctionSignPlus()$ function being defined as follows. $$\label{signplusfunction}% \seidlMathFunctionSignPlus(e_{\seidlMathIndex{1}})\,=% \left\{\begin{array}{@{}r@{}l@{}}% +1\,,&e_{\seidlMathIndex{1}}\ge0\\ -1\,,&e_{\seidlMathIndex{1}}\lt0 \end{array}\right.$$ Once more, (\ref{hs87:strongmonotonicchangeoverparameters}) means $e_{\seidlMathIndex{1}}{\lt\;\!}e_{\seidlMathIndex{3}}\;\!$ in (\ref{stepfunction}), and for $e_{\seidlMathIndex{1}}{\lt\;\!}e_{\seidlMathIndex{3}}\;\!$, the $\seidlMathFunctionStep()$ function can be expressed by the $\seidlMathFunctionSignPlus()$ function. $$\label{stepfunctionreplacement}% \seidlMathFunctionStep(e_{\seidlMathIndex{1}}\;\!,\;\!e_{\seidlMathIndex{2}}\;\!,\;\!e_{\seidlMathIndex{3}})\,% \stackrel{e_{\seidlMathIndex{1}}{\lt\;\!}e_{\seidlMathIndex{3}}}{=}% \left(1+\seidlMathFunctionSignPlus(e_{\seidlMathIndex{2}}-e_{\seidlMathIndex{1}})\right)% \left(1-\seidlMathFunctionSignPlus(e_{\seidlMathIndex{2}}-e_{\seidlMathIndex{3}})\right)/\,4$$ With (\ref{stepfunctionreplacement}), the relations (\ref{hs87:modifiedobjectivefunctionwithstep:1}) and (\ref{hs87:modifiedobjectivefunctionwithstep:2}) can again be rewritten as $$\begin{align} f_{\seidlMathIndex{m1}}(\seidlMathVector{x})\;&=\;% 30\,x_{\seidlMathIndex{1}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-(-\infty))\right)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\right)/\,4\nonumber\\ \label{hs87:modifiedobjectivefunctionwithsignplus:1}&+\;\;\!% 31\,x_{\seidlMathIndex{1}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\right)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-\infty)\right)/\,4\\ f_{\seidlMathIndex{m2}}(\seidlMathVector{x})\;&=\;% 28\,x_{\seidlMathIndex{2}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-(-\infty))\right)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\right)/\,4\nonumber\\ &+\;\;\!% 29\,x_{\seidlMathIndex{2}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\right)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\right)/\,4\nonumber\\ \label{hs87:modifiedobjectivefunctionwithsignplus:2}&+\;\;\!% 30\,x_{\seidlMathIndex{2}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\right)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-\infty)\right)/\,4\;\;\;. \end{align}$$ Further, to obtain (\ref{hs87:modifiedobjectivefunction}) from (\ref{hs87:objectivefunction}), $-\infty\le{x_{\seidlMathIndex{1}}},{x_{\seidlMathIndex{2}}}\lt\infty$ was assumed. The same condition now serves to simplify (\ref{hs87:modifiedobjectivefunctionwithsignplus:1}) and (\ref{hs87:modifiedobjectivefunctionwithsignplus:2}). 
We get $$\begin{align} f_{\seidlMathIndex{m1}}(\seidlMathVector{x})\;&=\;% 30\,x_{\seidlMathIndex{1}}\;\!% \,\big(1+1\big)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\right)/\,4\nonumber\\ \label{hs87:modifiedobjectivefunctionwithsignplussimplified:1}&+\;\;\!% 31\,x_{\seidlMathIndex{1}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\right)% \,\big(1-(-1)\big)/\,4\\ f_{\seidlMathIndex{m2}}(\seidlMathVector{x})\;&=\;% 28\,x_{\seidlMathIndex{2}}\;\!% \,\big(1+1\big)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\right)/\,4\nonumber\\ &+\;\;\!% 29\,x_{\seidlMathIndex{2}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\right)% \left(1-\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\right)/\,4\nonumber\\ \label{hs87:modifiedobjectivefunctionwithsignplussimplified:2}&+\;\;\!% 30\,x_{\seidlMathIndex{2}}\;\!% \left(1+\seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\right)% \,\big(1-(-1)\big)/\,4\;\;\;. \end{align}$$ We are now at the point where MPEC as a method comes into play.
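Before that machinery is added, (\ref{stepfunctionreplacement}) can be checked numerically. The following minimal Python sketch compares both sides of the identity for $e_{\seidlMathIndex{1}}{\lt\;\!}e_{\seidlMathIndex{3}}$; the grid of sample values is an arbitrary choice.

def signplus(e):
    """signplus() as defined above: +1 for e >= 0, -1 for e < 0."""
    return 1 if e >= 0 else -1

def step3(e1, e2, e3):
    """step() as defined above, repeated here for self-containment."""
    return 1 if (e1 <= e2 < e3) or (e1 == e2 == e3) else 0

# Check the sign-based replacement of step() for e1 < e3 on a small grid.
grid = [-1.0e9, -5.0, -1.0, 0.0, 1.0, 5.0, 1.0e9]
for e1 in grid:
    for e3 in grid:
        if not e1 < e3:
            continue
        for e2 in grid:
            lhs = step3(e1, e2, e3)
            rhs = (1 + signplus(e2 - e1)) * (1 - signplus(e2 - e3)) / 4
            assert lhs == rhs, (e1, e2, e3)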
Three new variables are introduced that have the following meaning. $$\begin{align} \label{hs87:substitutionrulesignplus:s11}% \seidlMathFunctionSignPlus(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\;&=\;s_{\seidlMathIndex{11}}\\ \label{hs87:substitutionrulesignplus:s21}% \seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\;&=\;s_{\seidlMathIndex{21}}\\ \label{hs87:substitutionrulesignplus:s22}% \seidlMathFunctionSignPlus(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\;&=\;s_{\seidlMathIndex{22}} \end{align}$$ Formally substituting the new variables for the $\seidlMathFunctionSignPlus()$ function values in accordance with (\ref{hs87:substitutionrulesignplus:s11}), (\ref{hs87:substitutionrulesignplus:s21}) and (\ref{hs87:substitutionrulesignplus:s22}) yields $$\begin{align} f_{\seidlMathIndex{m1}}(\seidlMathVector{x},s_{\seidlMathIndex{11}})\;&=\;% 30\,x_{\seidlMathIndex{1}}\,\big(1-s_{\seidlMathIndex{11}}\big)/\,2\nonumber\\ \label{hs87:modifiedobjectivefunctionwithsignvars:1}&+\;\;\!% 31\,x_{\seidlMathIndex{1}}\,\big(1+s_{\seidlMathIndex{11}}\big)/\,2\\ f_{\seidlMathIndex{m2}}(\seidlMathVector{x},s_{\seidlMathIndex{21}},s_{\seidlMathIndex{22}})\;&=\;% 28\,x_{\seidlMathIndex{2}}\,\big(1-s_{\seidlMathIndex{21}}\big)/\,2\nonumber\\ &+\;\;\!% 29\,x_{\seidlMathIndex{2}}\,\big(1+s_{\seidlMathIndex{21}}\big)\big(1-s_{\seidlMathIndex{22}}\big)/\,4\nonumber\\ \label{hs87:modifiedobjectivefunctionwithsignvars:2}&+\;\;\!% 30\,x_{\seidlMathIndex{2}}\,\big(1+s_{\seidlMathIndex{22}}\big)/\,2 \end{align}$$ instead of (\ref{hs87:modifiedobjectivefunctionwithsignplussimplified:1}) and (\ref{hs87:modifiedobjectivefunctionwithsignplussimplified:2}). (\ref{hs87:modifiedobjectivefunctionwithsignvars:1}) and (\ref{hs87:modifiedobjectivefunctionwithsignvars:2}) look quite healthy in comparison with, for example, (\ref{hs87:modifiedobjectivefunction:1}) and (\ref{hs87:modifiedobjectivefunction:2}). If, from here on, $s_{\seidlMathIndex{11}}$, $s_{\seidlMathIndex{21}}$ and $s_{\seidlMathIndex{22}}$ are considered as variables that draw three more degrees of freedom into the program, then, with (\ref{hs87:substitutionrulesignplus:s11}), (\ref{hs87:substitutionrulesignplus:s21}) and (\ref{hs87:substitutionrulesignplus:s22}), it becomes immediately clear that at least the following inequalities hold. $$\begin{align} \label{hs87:bounds:s11}-1\;\le\;s_{\seidlMathIndex{11}}\;&\le\;+1\\ \label{hs87:bounds:s21}-1\;\le\;s_{\seidlMathIndex{21}}\;&\le\;+1\\ \label{hs87:bounds:s22}-1\;\le\;s_{\seidlMathIndex{22}}\;&\le\;+1 \end{align}$$ What is not clear at this point is whether (\ref{hs87:bounds:s11}), (\ref{hs87:bounds:s21}) and (\ref{hs87:bounds:s22}) actually need to be attached as inequality constraints. However, the following deliberations will show that they do. To ensure that $s_{\seidlMathIndex{11}}$, $s_{\seidlMathIndex{21}}$ and $s_{\seidlMathIndex{22}}$ finally tend to $\pm1$ in a consistent manner, further equalities are indispensable. There are several possibilities.
$$\begin{align} \label{hs87:equalitieswithabs:s11}% s_{\seidlMathIndex{11}}\,|x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}}|-(% x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\;&=\;0\\ \label{hs87:equalitieswithabs:s21}% s_{\seidlMathIndex{21}}\,|x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}}|-(% x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\;&=\;0\\ \label{hs87:equalitieswithabs:s22}% s_{\seidlMathIndex{22}}\,|x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}}|-(% x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\;&=\;0 \end{align}$$ Equalities of the form (\ref{hs87:equalitieswithabs:s11}), (\ref{hs87:equalitieswithabs:s21}) and (\ref{hs87:equalitieswithabs:s22}) are preferred here to ones of the form $$\begin{align} \label{hs87:equalitieswithabsalt:s11}% s_{\seidlMathIndex{11}}-(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\,/\,|% x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}}|\;&=\;0\\ \label{hs87:equalitieswithabsalt:s21}% s_{\seidlMathIndex{21}}-(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\,/\,|% x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}}|\;&=\;0\\ \label{hs87:equalitieswithabsalt:s22}% s_{\seidlMathIndex{22}}-(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\,/\,|% x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}}|\;&=\;0\;\;\;, \end{align}$$ or even $$\begin{align} \label{hs87:equalitieswithabsaltalt:s11}% s_{\seidlMathIndex{11}}-(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\,/\,% \sqrt{(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})^2+\varepsilon^2}\;&=\;0\\ \label{hs87:equalitieswithabsaltalt:s21}% s_{\seidlMathIndex{21}}-(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\,/\,% \sqrt{(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})^2+\varepsilon^2}\;&=\;0\\ \label{hs87:equalitieswithabsaltalt:s22}% s_{\seidlMathIndex{22}}-(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\,/\,% \sqrt{(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})^2+\varepsilon^2}\;&=\;0 \end{align}$$ with some small constant $$\varepsilon^2\;\gt\;0\;\;\;.$$ What (\ref{hs87:equalitieswithabs:s11}) through (\ref{hs87:equalitieswithabsaltalt:s22}) have in common is that if any of the variables, $x_{\seidlMathIndex{1}}$ or $x_{\seidlMathIndex{2}}$, tends towards a discontinuity, then the appropriate equality loses significance. This stands to reason because only with such an insignificance the signum-type variables $s_{\seidlMathIndex{11}}$, $s_{\seidlMathIndex{21}}$ and $s_{\seidlMathIndex{22}}$ can behave capriciously and, hence, rapidly change their sign. On the other hand, if a signum-type variable can become nearly free, then it should at least be clamped, meaning that (\ref{hs87:bounds:s11}), (\ref{hs87:bounds:s21}) and (\ref{hs87:bounds:s22}) need actually be attached as inequality constraints. (\ref{hs87:equalitieswithabsalt:s11}), (\ref{hs87:equalitieswithabsalt:s21}) and (\ref{hs87:equalitieswithabsalt:s22}) are refused because of their unshielded denominators. (\ref{hs87:equalitieswithabsaltalt:s11}), (\ref{hs87:equalitieswithabsaltalt:s21}) and (\ref{hs87:equalitieswithabsaltalt:s22}) behave as (\ref{hs87:equalitieswithabsalt:s11}), (\ref{hs87:equalitieswithabsalt:s21}) and (\ref{hs87:equalitieswithabsalt:s22}), if $\varepsilon^2$ is very small, or exhibit precision problems if $\varepsilon^2$ is larger. The same is to say with respect to the behavior of the Jacobian and the Hessian near a discontinuity. 
With (\ref{hs87:equalitieswithabs:s11}), (\ref{hs87:equalitieswithabs:s21}) and (\ref{hs87:equalitieswithabs:s22}), one would not expect any conspicuous behavior when approaching a discontinuity, whereas, with (\ref{hs87:equalitieswithabsalt:s11}) through (\ref{hs87:equalitieswithabsaltalt:s22}), some degradation of the convergence speed seems to be inevitable.
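The preference can be made tangible with a small Python sketch that evaluates the three residual forms while $x$ approaches a changeover point $r$ from below, with the signum-type variable held at the, then wrong, value $+1$; the values of $r$, $\varepsilon$ and the sample distances are arbitrary choices.

import math

# Sketch: behavior of the three candidate equality forms near a changeover
# point r, with the signum-type variable held at s = +1 while x < r.
r, eps = 200.0, 1.0e-3          # arbitrary choices for illustration

def res_abs(s, x):              # form of the first group of equalities above
    return s * abs(x - r) - (x - r)

def res_div(s, x):              # form of the second group (division by abs)
    return s - (x - r) / abs(x - r)

def res_sqrt(s, x):             # form of the third group (smoothed with eps)
    return s - (x - r) / math.sqrt((x - r) ** 2 + eps ** 2)

for dx in (1.0, 1.0e-2, 1.0e-4, 1.0e-6):
    x = r - dx
    print(dx, res_abs(1, x), res_div(1, x), res_sqrt(1, x))
# res_abs fades towards 0, so the signum-type variable may flip cheaply right at the jump;
# res_div stays at 2 and is undefined at x = r;
# res_sqrt behaves like res_div while dx is large compared with eps and fades only
# once dx is of the order of eps or below.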
Next is to handle the $\seidlMathFunctionAbs()$ functions. To remove the $|\ldots|$ expressions from (\ref{hs87:equalitieswithabs:s11}), (\ref{hs87:equalitieswithabs:s21}) and (\ref{hs87:equalitieswithabs:s22}) in accordance with MPEC ideas, three new linear equalities with six further variables are introduced, all quite tricky. $$\begin{align} \label{hs87:equalities:y11}(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})-% \;\!y_{\seidlMathIndex{11+}}+\;\!y_{\seidlMathIndex{11-}}\;&=\;0\\ \label{hs87:equalities:y21}(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})-% \;\!y_{\seidlMathIndex{21+}}+\;\!y_{\seidlMathIndex{21-}}\;&=\;0\\ \label{hs87:equalities:y22}(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})-% \;\!y_{\seidlMathIndex{22+}}+\;\!y_{\seidlMathIndex{22-}}\;&=\;0 \end{align}$$ The new variables are enforced to be non-negative and well-behaving by means of the following twelve bounds. $$\begin{align} \label{hs87:inequalities:y11p}0\;\le\;y_{\seidlMathIndex{11+}}\;&\!\le\;% \max(u_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}},\,r_{\seidlMathIndex{11}}-\ell_{\seidlMathIndex{1}})\\ \label{hs87:inequalities:y11m}0\;\le\;y_{\seidlMathIndex{11-}}\;&\!\le\;% \max(u_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}},\,r_{\seidlMathIndex{11}}-\ell_{\seidlMathIndex{1}})\\ \label{hs87:inequalities:y21p}0\;\le\;y_{\seidlMathIndex{21+}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}},\,r_{\seidlMathIndex{21}}-\ell_{\seidlMathIndex{2}})\\ \label{hs87:inequalities:y21m}0\;\le\;y_{\seidlMathIndex{21-}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}},\,r_{\seidlMathIndex{21}}-\ell_{\seidlMathIndex{2}})\\ \label{hs87:inequalities:y22p}0\;\le\;y_{\seidlMathIndex{22+}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}},\,r_{\seidlMathIndex{22}}-\ell_{\seidlMathIndex{2}})\\ \label{hs87:inequalities:y22m}0\;\le\;y_{\seidlMathIndex{22-}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}},\,r_{\seidlMathIndex{22}}-\ell_{\seidlMathIndex{2}}) \end{align}$$ The point is now how the penalties appear. $$\begin{align} \label{hs87:penalties:y11}p_{\seidlMathIndex{11}}\;&=\;y_{\seidlMathIndex{11+}}\,y_{\seidlMathIndex{11-}}\\ \label{hs87:penalties:y21}p_{\seidlMathIndex{21}}\;&=\;y_{\seidlMathIndex{21+}}\,y_{\seidlMathIndex{21-}}\\ \label{hs87:penalties:y22}p_{\seidlMathIndex{22}}\;&=\;y_{\seidlMathIndex{22+}}\,y_{\seidlMathIndex{22-}} \end{align}$$ These penalties ensure that at least one member of each pair, consisting of $y_{\seidlMathIndex{i{}j+}}$ and $y_{\seidlMathIndex{i{}j-}}$, vanishes. If further any of the variables, $x_{\seidlMathIndex{1}}$ or $x_{\seidlMathIndex{2}}$, tends towards a discontinuity, then even both the members of the affected pair vanish. If on the other side at least one member of the pair always equals zero, then (\ref{hs87:equalities:y11}) through (\ref{hs87:equalities:y22}) can be interpreted as $$\begin{align} \label{hs87:absolutevalue:y11}|x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}}|% \;&=\;y_{\seidlMathIndex{11+}}+\,y_{\seidlMathIndex{11-}}\\ \label{hs87:absolutevalue:y21}|x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}}|% \;&=\;y_{\seidlMathIndex{21+}}+\,y_{\seidlMathIndex{21-}}\\ \label{hs87:absolutevalue:y22}|x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}}|% \;&=\;y_{\seidlMathIndex{22+}}+\,y_{\seidlMathIndex{22-}}\;\;\;. 
\end{align}$$ With (\ref{hs87:absolutevalue:y11}), (\ref{hs87:absolutevalue:y21}) and (\ref{hs87:absolutevalue:y22}) in hand, the $\seidlMathFunctionAbs()$ functions can finally be removed from (\ref{hs87:equalitieswithabs:s11}), (\ref{hs87:equalitieswithabs:s21}) and (\ref{hs87:equalitieswithabs:s22}) such that the latter take the form $$\begin{align} \label{hs87:equalitiesineffect:s11}% s_{\seidlMathIndex{11}}\,(\:\!y_{\seidlMathIndex{11+}}+\,y_{\seidlMathIndex{11-}})-% (x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\;&=\;0\\ \label{hs87:equalitiesineffect:s21}% s_{\seidlMathIndex{21}}\,(\:\!y_{\seidlMathIndex{21+}}+\,y_{\seidlMathIndex{21-}})-% (x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\;&=\;0\\ \label{hs87:equalitiesineffect:s22}% s_{\seidlMathIndex{22}}\,(\:\!y_{\seidlMathIndex{22+}}+\,y_{\seidlMathIndex{22-}})-% (x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\;&=\;0\;\;\;. \end{align}$$
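The bookkeeping behind (\ref{hs87:equalities:y11}) through (\ref{hs87:absolutevalue:y22}) can be mimicked for a single scalar expression in a few lines of Python. This is a sketch of the intended end state only; in the actual program the solver determines $y_{\seidlMathIndex{i{}j+}}$ and $y_{\seidlMathIndex{i{}j-}}$ through the equality, the bounds and the penalty rather than computing them directly.

def split_plus_minus(v):
    """Complementary decomposition v = y_plus - y_minus with y_plus * y_minus = 0,
    i.e. the state the penalty is supposed to enforce."""
    y_plus = max(v, 0.0)
    y_minus = max(-v, 0.0)
    return y_plus, y_minus

for v in (90.0, 0.0, -123.4):
    y_plus, y_minus = split_plus_minus(v)
    assert abs(v) == y_plus + y_minus       # the recovered absolute value
    assert v == y_plus - y_minus            # the linear equality
    assert y_plus * y_minus == 0.0          # the quadratic penalty vanishes

At a solution of the reformulated program the penalties (\ref{hs87:penalties:y11}) through (\ref{hs87:penalties:y22}) are expected to vanish, which is exactly the situation the assertions above describe.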
What does our nonlinear program without jump discontinuities now look like? (\ref{hs87:parameters:a}) through (\ref{hs87:changeoverparameters:r22}) are taken from the program with discontinuities, without any modification. The new objective function containing all the nine new variables here appears as $$\begin{align} f_{\seidlMathIndex{mp}}\;&=\;f_{\seidlMathIndex{m1}}+f_{\seidlMathIndex{m2}}\nonumber\\ &+\;\;\!y_{\seidlMathIndex{11+}}\,y_{\seidlMathIndex{11-}}\nonumber\\ &+\;\;\!y_{\seidlMathIndex{21+}}\,y_{\seidlMathIndex{21-}}\nonumber\\ \label{hs87:modifiedobjectivefunctionfin}&+\;\;\!y_{\seidlMathIndex{22+}}\,y_{\seidlMathIndex{22-}}\\ f_{\seidlMathIndex{m1}}\;&=\;% 30\,x_{\seidlMathIndex{1}}\,\big(1-s_{\seidlMathIndex{11}}\big)/\,2\nonumber\\ \label{hs87:modifiedobjectivefunctionfin:1}&+\;\;\!% 31\,x_{\seidlMathIndex{1}}\,\big(1+s_{\seidlMathIndex{11}}\big)/\,2\\ f_{\seidlMathIndex{m2}}\;&=\;% 28\,x_{\seidlMathIndex{2}}\,\big(1-s_{\seidlMathIndex{21}}\big)/\,2\nonumber\\ &+\;\;\!% 29\,x_{\seidlMathIndex{2}}\,\big(1+s_{\seidlMathIndex{21}}\big)\big(1-s_{\seidlMathIndex{22}}\big)/\,4\nonumber\\ \label{hs87:modifiedobjectivefunctionfin:2}&+\;\;\!% 30\,x_{\seidlMathIndex{2}}\,\big(1+s_{\seidlMathIndex{22}}\big)/\,2\;\;\;. \end{align}$$ The nine newly introduced degrees of freedom are reduced to three by, firstly, three additional linear equalities, $$\begin{align} \label{hs87:equalitiesfin:y11}(x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})-% \;\!y_{\seidlMathIndex{11+}}+\;\!y_{\seidlMathIndex{11-}}\;&=\;0\\ \label{hs87:equalitiesfin:y21}(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})-% \;\!y_{\seidlMathIndex{21+}}+\;\!y_{\seidlMathIndex{21-}}\;&=\;0\\ \label{hs87:equalitiesfin:y22}(x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})-% \;\!y_{\seidlMathIndex{22+}}+\;\!y_{\seidlMathIndex{22-}}\;&=\;0\;\;\;, \end{align}$$ see (\ref{hs87:equalities:y11}), (\ref{hs87:equalities:y21}) and (\ref{hs87:equalities:y22}), and, secondly, three additional quadratic equalities, $$\begin{align} \label{hs87:equalitiesfin:s11}% s_{\seidlMathIndex{11}}\,(\:\!y_{\seidlMathIndex{11+}}+\,y_{\seidlMathIndex{11-}})-% (x_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}})\;&=\;0\\ \label{hs87:equalitiesfin:s21}% s_{\seidlMathIndex{21}}\,(\:\!y_{\seidlMathIndex{21+}}+\,y_{\seidlMathIndex{21-}})-% (x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}})\;&=\;0\\ \label{hs87:equalitiesfin:s22}% s_{\seidlMathIndex{22}}\,(\:\!y_{\seidlMathIndex{22+}}+\,y_{\seidlMathIndex{22-}})-% (x_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}})\;&=\;0\;\;\;, \end{align}$$ see (\ref{hs87:equalitiesineffect:s11}), (\ref{hs87:equalitiesineffect:s21}) and (\ref{hs87:equalitiesineffect:s22}).
The six new non-negatives are bounded by means of twelve inequalities, $$\begin{align} \label{hs87:inequalitiesfin:y11p}0\;\le\;y_{\seidlMathIndex{11+}}\;&\!\le\;% \max(u_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}},\,r_{\seidlMathIndex{11}}-\ell_{\seidlMathIndex{1}})% \;=\;300\\ \label{hs87:inequalitiesfin:y11m}0\;\le\;y_{\seidlMathIndex{11-}}\;&\!\le\;% \max(u_{\seidlMathIndex{1}}-r_{\seidlMathIndex{11}},\,r_{\seidlMathIndex{11}}-\ell_{\seidlMathIndex{1}})% \;=\;300\\ \label{hs87:inequalitiesfin:y21p}0\;\le\;y_{\seidlMathIndex{21+}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}},\,r_{\seidlMathIndex{21}}-\ell_{\seidlMathIndex{2}})% \;=\;900\\ \label{hs87:inequalitiesfin:y21m}0\;\le\;y_{\seidlMathIndex{21-}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{21}},\,r_{\seidlMathIndex{21}}-\ell_{\seidlMathIndex{2}})% \;=\;900\\ \label{hs87:inequalitiesfin:y22p}0\;\le\;y_{\seidlMathIndex{22+}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}},\,r_{\seidlMathIndex{22}}-\ell_{\seidlMathIndex{2}})% \;=\;800\\ \label{hs87:inequalitiesfin:y22m}0\;\le\;y_{\seidlMathIndex{22-}}\;&\!\le\;% \max(u_{\seidlMathIndex{2}}-r_{\seidlMathIndex{22}},\,r_{\seidlMathIndex{22}}-\ell_{\seidlMathIndex{2}})% \;=\;800\;\;\;, \end{align}$$ copied from (\ref{hs87:inequalities:y11p}) through (\ref{hs87:inequalities:y22m}) and evaluated. Finally, the three sign carriers are bounded in accordance with $$\begin{align} \label{hs87:boundsfin:s11}-1\;\le\;s_{\seidlMathIndex{11}}\;&\le\;+1\\ \label{hs87:boundsfin:s21}-1\;\le\;s_{\seidlMathIndex{21}}\;&\le\;+1\\ \label{hs87:boundsfin:s22}-1\;\le\;s_{\seidlMathIndex{22}}\;&\le\;+1\;\;\;, \end{align}$$ taken from (\ref{hs87:bounds:s11}) through (\ref{hs87:bounds:s22}).
The variables $x_{\seidlMathIndex{1}}$ through $x_{\seidlMathIndex{6}}$ are initialized in the same manner as for the nonlinear program with discontinuities, namely, applying (\ref{hs87:startingpoint}). The new non-negatives and sign carriers should be initialized such that (\ref{hs87:equalitiesfin:y11}) through (\ref{hs87:boundsfin:s22}) are fulfilled which is, with the knowledge that at least one of the non-negatives of each pair vanishes, quite easy. One gets $$\begin{align} \label{hs87:startingpoint:y11p}y_{\seidlMathIndex{11+}\:\seidlMathIndex{0}}&=\;90\\ \label{hs87:startingpoint:y11m}y_{\seidlMathIndex{11-}\:\seidlMathIndex{0}}&=\;0\\ \label{hs87:startingpoint:y21p}y_{\seidlMathIndex{21+}\:\seidlMathIndex{0}}&=\;900\\ \label{hs87:startingpoint:y21m}y_{\seidlMathIndex{21-}\:\seidlMathIndex{0}}&=\;0\\ \label{hs87:startingpoint:y22p}y_{\seidlMathIndex{22+}\:\seidlMathIndex{0}}&=\;800\\ \label{hs87:startingpoint:y22m}y_{\seidlMathIndex{22-}\:\seidlMathIndex{0}}&=\;0\\ \label{hs87:startingpoint:s11}s_{\seidlMathIndex{11}\:\seidlMathIndex{0}}&=\;1\\ \label{hs87:startingpoint:s21}s_{\seidlMathIndex{21}\:\seidlMathIndex{0}}&=\;1\\ \label{hs87:startingpoint:s22}s_{\seidlMathIndex{22}\:\seidlMathIndex{0}}&=\;1\;\;\;. \end{align}$$ That is it.
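To make the preceding assembly concrete, the whole reformulated program can be encoded, for instance, with SciPy. The following Python sketch is one possible packaging; the ordering of the decision vector, the solver SLSQP and its options are arbitrary choices, and nothing here is meant to guarantee convergence to the best known solution.

import numpy as np
from scipy.optimize import minimize

# Decision vector z = (x1..x6, y11+, y11-, y21+, y21-, y22+, y22-, s11, s21, s22)
a, b, c = 131.078, 1.48477, 0.90798
d, e = np.cos(1.47588), np.sin(1.47588)
r11, r21, r22 = 300.0, 100.0, 200.0

def objective(z):
    x1, x2 = z[0], z[1]
    y11p, y11m, y21p, y21m, y22p, y22m = z[6:12]
    s11, s21, s22 = z[12:15]
    fm1 = 30.0 * x1 * (1 - s11) / 2 + 31.0 * x1 * (1 + s11) / 2
    fm2 = (28.0 * x2 * (1 - s21) / 2
           + 29.0 * x2 * (1 + s21) * (1 - s22) / 4
           + 30.0 * x2 * (1 + s22) / 2)
    penalty = y11p * y11m + y21p * y21m + y22p * y22m
    return fm1 + fm2 + penalty

def equalities(z):
    x1, x2, x3, x4, x5, x6 = z[:6]
    y11p, y11m, y21p, y21m, y22p, y22m = z[6:12]
    s11, s21, s22 = z[12:15]
    return np.array([
        300.0 - x1 - x3 * x4 * np.cos(b - x6) / a + c * d * x3 ** 2 / a,
        -x2 - x3 * x4 * np.cos(b + x6) / a + c * d * x4 ** 2 / a,
        -x5 - x3 * x4 * np.sin(b + x6) / a + c * e * x4 ** 2 / a,
        200.0 - x3 * x4 * np.sin(b - x6) / a + c * e * x3 ** 2 / a,
        (x1 - r11) - y11p + y11m,
        (x2 - r21) - y21p + y21m,
        (x2 - r22) - y22p + y22m,
        s11 * (y11p + y11m) - (x1 - r11),
        s21 * (y21p + y21m) - (x2 - r21),
        s22 * (y22p + y22m) - (x2 - r22),
    ])

bounds = [(0, 400), (0, 1000), (340, 420), (340, 420), (-18, -10.7), (0, 0.5236),
          (0, 300), (0, 300), (0, 900), (0, 900), (0, 800), (0, 800),
          (-1, 1), (-1, 1), (-1, 1)]

z0 = np.array([390, 1000, 419.5, 340.5, -10.7, 0.5,
               90, 0, 900, 0, 800, 0, 1, 1, 1], dtype=float)

result = minimize(objective, z0, method="SLSQP", bounds=bounds,
                  constraints=[{"type": "eq", "fun": equalities}],
                  options={"maxiter": 500, "ftol": 1e-9})
print(result.x[:6], result.fun)

Any other NLP solver that accepts nonlinear equality constraints and simple bounds can be fed with the same data.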
A straightforward method to treat jump discontinuities in nonlinear programs has been presented, manually converting TP 87 into test problem 87-2 of the revised Hock & Schittkowski model collection. The applied technique is referred to as MPEC. So, to treat jump discontinuities, the modeler can basically choose between softening the problem spots with small transition domains and using MPEC. Transition domains are intuitive and conserve the model dimensions. The obtained solutions are sensible but may be inexact. Transition domains represent legal areas, but they are areas of strong curvature, and solvers will react accordingly. MPEC does not have the latter disadvantages but, as seen with the present work, the models may get significantly blown up.