I want to solve
$\min_x f(x) + g(x)$
where $f + g$ is convex but, for example, $g$ is non-convex. To make matters simpler, let us say that $g$ is concave.
If I use a splitting method like ADMM, and assuming that I can compute the proximal operators of $f$ and $g$ exactly, can I guarantee that ADMM converges to the optimum?
To be concrete, what happens if we apply ADMM to the consensus decomposition
$\min_{x,y,z} f(x) + g(y)$ subject to $x=z, y = z$
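For concreteness, the iterations I have in mind are the standard scaled-form consensus ADMM updates for this splitting (this is what the code at the end of the post implements):
$$x_1^{k+1} = \operatorname{prox}_{f/\rho}\!\left(z^k - u_1^k\right), \qquad x_2^{k+1} = \operatorname{prox}_{g/\rho}\!\left(z^k - u_2^k\right),$$
$$z^{k+1} = \tfrac{1}{2}\left(x_1^{k+1} + u_1^k + x_2^{k+1} + u_2^k\right),$$
$$u_i^{k+1} = u_i^k + x_i^{k+1} - z^{k+1}, \qquad i = 1, 2,$$
where $u_1, u_2$ are the scaled dual variables and $\rho > 0$ is the penalty parameter.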
I know people have looked at non-convex ADMM, but this problem does not seem to be as hard as the general non-convex case. Are there any references for this type of problem?
I did a quick experiment with $f(x) = x^2$ and $g(x) = -0.5(x-1)^2$. The minimizer is $x^\star = -1$ (and the optimal value is also $-1$). If I run ADMM with the above decomposition for $50$ iterations and plot the error against the penalty parameter $\rho$, I get the following plot
Sometimes it converges, other times it does not, depending on the tuning of $\rho$.
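For this example both subproblems in the $x$-update have closed forms (writing $v_i = z - u_i$ for the prox arguments), which is what the updates in the code use:
$$\operatorname{prox}_{f/\rho}(v_1) = \arg\min_x \left\{ x^2 + \tfrac{\rho}{2}(x - v_1)^2 \right\} = \frac{\rho v_1}{2 + \rho},$$
$$\arg\min_x \left\{ -\tfrac{1}{2}(x-1)^2 + \tfrac{\rho}{2}(x - v_2)^2 \right\} = \frac{\rho v_2 - 1}{\rho - 1}.$$
Note that the second subproblem is strictly convex, so that its stationary point is a genuine minimizer, only when $\rho > 1$; for $\rho < 1$ the formula gives the stationary point of an unbounded subproblem, and at $\rho = 1$ it is undefined.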
The Matlab code to produce the plot is
evol = [];
for rho = 0.1:0.1:10
    % random initialization of the prox arguments and scaled dual variables
    n_1 = randn; n_2 = randn; u_1 = randn; u_2 = randn;
    for i = 1:50
        x_1 = rho*n_1/(2 + rho);            % prox step for f(x) = x^2
        x_2 = (rho*n_2 - 1)/(rho - 1);      % stationary point of the g-subproblem
        z   = 0.5*(x_1 + u_1 + x_2 + u_2);  % consensus (z-) update
        u_1 = u_1 + (x_1 - z);              % scaled dual updates
        u_2 = u_2 + (x_2 - z);
        n_1 = z - u_1;                      % next prox arguments
        n_2 = z - u_2;
    end
    evol = [evol, z];                       % store the last iterate for this rho
end
plot(0.1:0.1:10, log(abs(evol + 1)));       % log of the error |z - (-1)| for each rho
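As a quick sanity check, here is a minimal sketch of a single run with a fixed seed; the value $\rho = 5$ and the seed are arbitrary choices, not part of the experiment above. When the iteration converges, the final $z$ and the objective value should both be close to $-1$.

% Minimal sanity check: single run with a fixed seed and an arbitrary rho.
rng(0);                                 % fixed seed, for reproducibility
rho = 5;                                % arbitrary choice of penalty parameter
n_1 = randn; n_2 = randn; u_1 = randn; u_2 = randn;
for i = 1:50
    x_1 = rho*n_1/(2 + rho);            % prox step for f(x) = x^2
    x_2 = (rho*n_2 - 1)/(rho - 1);      % stationary point of the g-subproblem
    z   = 0.5*(x_1 + u_1 + x_2 + u_2);  % consensus (z-) update
    u_1 = u_1 + (x_1 - z);              % scaled dual updates
    u_2 = u_2 + (x_2 - z);
    n_1 = z - u_1;                      % next prox arguments
    n_2 = z - u_2;
end
fprintf('z = %.4f, objective = %.4f\n', z, z^2 - 0.5*(z - 1)^2);
% If the iteration converges, both printed numbers should be close to -1.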