This is an extension of a post I wrote some years back, "Adding an optimizer to a nonlinear set of equations".
I am looking at implementing the solution proposed by wyer33 in that thread.
In summary, I have a nonlinear system of $N$ equations in $N$ unknowns (the "Basic" model), which contains a set of fixed parameters $p_k$. I currently solve this using Newton's method.
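For concreteness, a minimal sketch of the Basic model solve might look like the following, assuming a residual function `F(x, p)` and Jacobian `J(x, p)` (both names are hypothetical placeholders, not my actual code):

```python
import numpy as np

def newton_solve(F, J, x0, p, tol=1e-10, max_iter=50):
    """Solve the square system F(x, p) = 0 for x, with parameters p held fixed."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = F(x, p)                        # residual vector, length N
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(J(x, p), -r)  # Newton step: J dx = -r
        x += dx
    raise RuntimeError("Newton iteration did not converge")
```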
I then needed to add an optimizer to the system that could vary a subset of the parameters to minimize or maximize some objective function. At the time I implemented this with a nonlinear optimizer that calls the Basic model above for each function evaluation.
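Schematically, the nested arrangement is something like the sketch below. This is illustrative only: `objective(x, q)`, the free-parameter vector `q`, and the reuse of the `newton_solve` helper above are all assumptions, with any remaining fixed parameters taken to be baked into `F`.

```python
from scipy.optimize import minimize

def outer_objective(q, F, J, x0, objective):
    # Every objective evaluation re-converges the full Basic model
    # for the trial parameter set q, then evaluates the objective
    # at the converged state x.
    x = newton_solve(F, J, x0, q)
    return objective(x, q)

# result = minimize(outer_objective, q0, args=(F, J, x0, objective))
```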
This does mean that the full Basic model is converged to tight tolerance for each optimization iteration, potentially resulting in $N_{opt} \times N_{bas}$ iterations in total. I questioned whether it might be possible to run the optimizer and the Basic model simultaneously, resulting in (far fewer) $N_{opt}$ iterations.
The method proposed by wyer33 instead uses a nonlinear optimizer with the Basic model as a set of nonlinear equality (to zero) constraints; a sketch of that formulation is below. I am looking at this again now and have a question.
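In SciPy terms, I understand the simultaneous formulation to look roughly like this (the stacking of the state $x$ and the free parameters $q$ into one variable vector, and the use of SLSQP, are my assumptions rather than wyer33's exact proposal):

```python
import numpy as np
from scipy.optimize import minimize

def solve_simultaneous(F, objective, x0, q0):
    N = len(x0)
    z0 = np.concatenate([x0, q0])  # optimizer unknowns: state x stacked with free parameters q

    def obj(z):
        return objective(z[:N], z[N:])

    def residuals(z):
        return F(z[:N], z[N:])     # the N model equations, driven to zero as equality constraints

    cons = {"type": "eq", "fun": residuals}
    return minimize(obj, z0, method="SLSQP", constraints=cons)
```

Here the optimizer and the model converge together, so the model equations are only satisfied exactly at the final iterate.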
Let's say I have implemented this in the manner wyer33 suggested (an optimizer with a square system of equality constraints equal to zero). I would still like to use the same code in the unoptimized case, i.e. with the full set of parameters fixed, so that the optimizer simply solves the nonlinear constraints without optimizing anything. What should the objective function of the optimizer be in this case?