GLOBAL CONVERGENCE METHODS FOR NONSMOOTH EQUATIONS WITH FINITELY MANY MAXIMUM FUNCTIONS AND THEIR APPLICATIONS

  • Pang, Deyan (College of Mathematics, Qingdao University) ;
  • Ju, Jingjie (College of Mathematics, Qingdao University) ;
  • Du, Shouqiang (College of Mathematics, Qingdao University)
  • Received : 2014.05.11
  • Accepted : 2014.06.14
  • Published : 2014.09.30

Abstract

Nonsmooth equations with finitely many maximum functions often arise in the study of complementarity problems, variational inequalities and many problems in engineering and mechanics. In this paper, we consider globally convergent methods for nonsmooth equations with finitely many maximum functions. The steepest descent method and the smoothing gradient method are used to solve these equations. In addition, the convergence analysis and the applications are given. The numerical results for the smoothing gradient method indicate that the method works quite well in practice.


1. Introduction

Owing to their wide use in problems of image restoration, variable selection, stochastic equilibrium and optimal control, nonsmooth equations and their related problems have been widely studied by many authors (see [1-16]). In this paper, we consider the nonsmooth equations with finitely many maximum functions

maxj∈Ji fij(x) = 0, i = 1, . . . , n,     (1)

where x ∈ Rn, fij : Rn → R are continuously differentiable functions, j ∈ Ji, i = 1, . . . , n, and Ji, i = 1, . . . , n, are finite index sets. This system of nonsmooth equations with finitely many maximum functions has a specific application background: complementarity problems, variational inequality problems and many problems in national defense, economics, finance, engineering and management lead to systems of this form (see for instance [9,10]). Obviously, (1) is a system of semismooth equations. For simplicity, we denote

Fi(x) = maxj∈Ji fij(x), i = 1, . . . , n, and F(x) = (F1(x), . . . , Fn(x))T.

Thus, the equations (1) can be briefly written as

F(x) = 0.

The value function of F(x) is defined as

f(x) = (1/2)∥F(x)∥².

Then, the system F(x) = 0 can be solved by solving the following problem

minx∈Rn f(x).

We consider using the following iterative method for solving this minimization problem:

xk+1 = xk + αkdk, k = 0, 1, . . . ,

where αk > 0 is the stepsize and dk is a search direction.
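
To fix ideas, the following Python sketch (an illustration with assumed data, not the authors' MATLAB code) evaluates the residual F(x) and the value function f(x) = (1/2)∥F(x)∥² for a small instance of (1) with two maximum functions.

```python
import numpy as np

# Toy instance with n = 2 and J1 = J2 = {1, 2}; the smooth inner functions
# f_ij below are chosen only for illustration.
def F(x):
    x = np.asarray(x, dtype=float)
    F1 = max(x[0] - 1.0, x[0] ** 2 - x[1])        # F_1(x) = max_{j in J_1} f_1j(x)
    F2 = max(x[1] + 0.5, x[0] + x[1] - 2.0)       # F_2(x) = max_{j in J_2} f_2j(x)
    return np.array([F1, F2])

def value_function(x):
    """f(x) = 0.5 * ||F(x)||^2, used to reformulate F(x) = 0 as a minimization."""
    r = F(x)
    return 0.5 * float(r @ r)

if __name__ == "__main__":
    x = np.array([0.0, 0.0])
    print("F(x) =", F(x), " f(x) =", value_function(x))
```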

This paper is organized as follows. In Section 2, when f is a smooth function, we present the steepest descent method for solving it and give its global convergence result. When f is a nonsmooth function, we call it a nondifferentiable problem; many papers (see for instance [4,7,8,12,13,14,15,16]) deal with this kind of problem. We give the smoothing gradient method for solving it and present the convergence analysis. In Section 3, we discuss the applications of the methods; this further illustrates that the system of nonsmooth equations with finitely many maximum functions is closely related to optimization problems. In the last section, we discuss the application of the method to a related minimax optimization problem. Numerical results are also given.

Notation. Throughout the paper, ∥·∥ denotes the l2 norm, R+ = {x | x ≥ 0, x ∈ R}, and gk denotes the gradient of f at xk.

 

2. The methods and their convergence analysis

Case (I). First, when f is a smooth function, we give the steepest descent method for solving it. The steepest descent method is one of the most widely used methods for solving unconstrained optimization problems (see [11]).

Method 2.1

Step 1. Choose σ1 ∈ (0, 0.5) and σ2 ∈ (σ1, 1). Give an initial point x0 ∈ Rn and let k := 0.

Step 2. Compute gk = ∇f(xk) and let dk = −gk. Determine αk by the Wolfe line search, where αk = max{ρ0, ρ1, . . .} and ρi satisfies

f(xk + ρidk) ≤ f(xk) + σ1ρigkT dk     (7)

and

∇f(xk + ρidk)T dk ≥ σ2gkT dk.     (8)

Set xk+1 = xk + αkdk.

Step 3. Let k := k + 1, go to step 2.
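
A minimal Python sketch of Method 2.1 follows; the backtracking factor ρ = 0.5, the parameter values and the quadratic test function are illustrative assumptions, and the line search simply returns the largest trial step ρi that satisfies both (7) and (8).

```python
import numpy as np

def wolfe_backtracking(f, grad, x, d, sigma1=0.1, sigma2=0.9, rho=0.5, max_trials=50):
    """Return the largest alpha in {rho^0, rho^1, ...} satisfying (7) and (8)."""
    g = grad(x)
    gTd = g @ d
    alpha = 1.0
    for _ in range(max_trials):
        x_new = x + alpha * d
        armijo = f(x_new) <= f(x) + sigma1 * alpha * gTd          # condition (7)
        curvature = grad(x_new) @ d >= sigma2 * gTd               # condition (8)
        if armijo and curvature:
            return alpha
        alpha *= rho
    return alpha  # fallback if no trial step was accepted

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    """Method 2.1: d_k = -g_k with the Wolfe-type backtracking above."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g
        x = x + wolfe_backtracking(f, grad, x, d) * d
    return x, k

if __name__ == "__main__":
    # Smooth test function (illustrative): f(x) = (x1 - 1)^2 + 10*(x2 + 2)^2.
    f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])
    print(steepest_descent(f, grad, np.zeros(2)))
```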

The global convergence of Method 2.1 is given by the following theorem.

Theorem 2.1. Let {xk} be generated by Method 2.1. Suppose that f(x) is bounded below and that, for any x0 ∈ Rn, ∇f(x) exists and is uniformly continuous on the level set

{x ∈ Rn | f(x) ≤ f(x0)}.

Then we have

limk→∞ ∥gk∥ = 0.

Proof. Suppose that the theorem is not true. Then there exist ε > 0 and a subsequence (whose indices we still denote by k) such that

∥gk∥ ≥ ε > 0.

Since dk is a descent direction, (7) implies that {f(xk)} is a monotonically decreasing sequence. Since f(xk) is bounded below, the limit of f(xk) exists. Thus, we have

f(xk) − f(xk+1) → 0, as k → ∞.

Set sk = αkdk. From (7), we know that

f(xk) − f(xk+1) ≥ −σ1gkT sk.

Since the angle θk between dk and −gk is 0, we have

−gkT sk = ∥gk∥∥sk∥,

and hence f(xk) − f(xk+1) ≥ σ1∥gk∥∥sk∥.

Note that ∥gk∥ ≥ ε > 0, hence we must have ∥sk∥ → 0.

Because ∇f(x) is uniformly continuous on the level set and ∥sk∥ → 0, we have

∥∇f(xk + sk) − gk∥ → 0, as k → ∞.

That is,

(∇f(xk + sk) − gk)T dk = o(∥dk∥).

On the other hand, (8) gives (∇f(xk + sk) − gk)T dk ≥ (σ2 − 1)gkT dk = (1 − σ2)∥gk∥∥dk∥ ≥ (1 − σ2)ε∥dk∥, which contradicts the previous limit since σ2 < 1. So we have

limk→∞ ∥gk∥ = 0.

That is, limk→∞ ∥∇f(xk)∥ = 0.

Case (II). Now suppose that f is locally Lipschitz continuous but not necessarily differentiable. The generalized gradient of f at x is defined by

∂f(x) = conv{ lim ∇f(xi) : xi → x, xi ∈ Df },

where "conv" denotes the convex hull of a set and Df is the set of points at which f is differentiable.

First, we introduce the definition of a smoothing function.

Definition 2.2 ([3]). Let f : Rn → R be a continuous function. We call f̃ : Rn × R+ → R a smoothing function of f if f̃(·, μ) is continuously differentiable in Rn for any fixed μ > 0 and

limz→x, μ↓0 f̃(z, μ) = f(x)

for any x ∈ Rn.
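
For instance, one standard smoothing function of the plus function f(t) = (t)+ (an illustrative choice, not necessarily the one used in Section 3) is f̃(t, μ) = μ ln(1 + exp(t/μ)). It is continuously differentiable in t for every fixed μ > 0 and satisfies 0 < f̃(t, μ) − (t)+ ≤ μ ln 2, so f̃(z, μ) → (t)+ as z → t and μ ↓ 0.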

In the following, we present a smoothing gradient method for this minimization problem.

Method 2.2

Step 1. Choose σ1 ∈ (0, 0.5), σ2 ∈ (σ1, 1), γ > 0, γ1 ∈ (0, 1) and an initial smoothing parameter μ0 > 0. Give an initial point x0 ∈ Rn and let k := 0.

Step 2. Compute gk = ∇xf̃(xk, μk) and let dk = −gk. Determine αk by the Wolfe line search, where αk = max{ρ0, ρ1, . . .} and ρi satisfies

f̃(xk + ρidk, μk) ≤ f̃(xk, μk) + σ1ρigkT dk

and

∇xf̃(xk + ρidk, μk)T dk ≥ σ2gkT dk.

Set xk+1 = xk + αkdk.

Step 3. If ∥∇xf̃(xk+1, μk)∥ ≥ γμk, then set μk+1 = μk; otherwise, set μk+1 = γ1μk.

Step 4. Let k := k + 1, go to Step 2.
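
A minimal Python sketch of Method 2.2 is given below. The test problem f(x) = ∥x∥1, its smoothing f̃(x, μ) = Σi sqrt(xi² + μ²), and all parameter values are illustrative assumptions rather than the authors' MATLAB implementation; the update of μk follows Step 3 above.

```python
import numpy as np

def f_tilde(x, mu):
    """Smoothing of f(x) = ||x||_1: each |x_i| is replaced by sqrt(x_i^2 + mu^2)."""
    return float(np.sum(np.sqrt(x ** 2 + mu ** 2)))

def grad_f_tilde(x, mu):
    return x / np.sqrt(x ** 2 + mu ** 2)

def wolfe_backtracking(x, d, mu, sigma1=0.1, sigma2=0.9, rho=0.5, max_trials=50):
    g = grad_f_tilde(x, mu)
    gTd = g @ d
    alpha = 1.0
    for _ in range(max_trials):
        x_new = x + alpha * d
        if (f_tilde(x_new, mu) <= f_tilde(x, mu) + sigma1 * alpha * gTd
                and grad_f_tilde(x_new, mu) @ d >= sigma2 * gTd):
            return alpha
        alpha *= rho
    return alpha

def smoothing_gradient(x0, mu0=1.0, gamma=1.0, gamma1=0.5, max_iter=500):
    """Method 2.2 (sketch): gradient steps on f_tilde(., mu_k), with mu_k driven to 0."""
    x, mu = np.asarray(x0, dtype=float), mu0
    for _ in range(max_iter):
        d = -grad_f_tilde(x, mu)
        x = x + wolfe_backtracking(x, d, mu) * d
        # Step 3: keep mu_k while the smoothed gradient is still large, otherwise shrink it.
        if np.linalg.norm(grad_f_tilde(x, mu)) < gamma * mu:
            mu = gamma1 * mu
    return x, mu

if __name__ == "__main__":
    x_star, mu_final = smoothing_gradient(np.array([2.0, -3.0]))
    print("x* ~", x_star, " final mu =", mu_final)
```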

Then, we give the convergence result of Method 2.2.

Theorem 2.3. Suppose that f̃ is a smoothing function of f and that, for any fixed μ > 0, f̃(·, μ) satisfies the conditions of Theorem 2.1. Then {xk} generated by Method 2.2 satisfies

limk→∞ μk = 0

and

lim infk→∞ ∥∇xf̃(xk+1, μk)∥ = 0.

Proof. Define K = {k | μk+1 = γ1μk}. If K is a finite set, then there exists an integer N such that μk = μN for all k ≥ N.

Then, by Step 3 of Method 2.2, ∥∇xf̃(xk+1, μN)∥ ≥ γμN for all k ≥ N.

Since f̃ is a smoothing function, for k ≥ N Method 2.2 reduces to solving minx∈Rn f̃(x, μN).

Hence, from Theorem 2.1 above, we can deduce that

limk→∞ ∥∇xf̃(xk, μN)∥ = 0,

which contradicts the inequality in Step 3 above, since γμN > 0. This shows that K must be infinite, and hence we know that

limk→∞ μk = 0.

Since K is infinite, we can assume that K = {k0, k1, . . .}, where k0 < k1 < . . .. Then we have

∥∇xf̃(xki+1, μki)∥ < γμki → 0, as i → ∞.

We get the theorem.

From Theorem 2.3 above and the gradient consistency results discussed in [3,6], we can obtain the following result.

Theorem 2.4. Any accumulation point x* of the sequence generated by Method 2.2 is a Clarke stationary point of f, that is,

0 ∈ ∂f(x*).

 

3. The applications of the methods

3.1. Application in solving the generalized complementarity problem.

Consider the generalized complementarity problem (GCP) as in [5]: find x ∈ Rn such that

F(x) ≥ 0, G(x) ≥ 0, F(x)TG(x) = 0,     (11)

where F = (F1, F2, . . . , Fn)T, G = (G1, G2, . . . , Gn)T, and Fi : Rn → R (i = 1, . . . , n), Gi : Rn → R (i = 1, . . . , n) are continuously differentiable functions.

Solving (11) is equivalent to solving the following equations

min(Fi(x), Gi(x)) = 0, i = 1, . . . , n.     (12)

Since min(x, y) = x − (x − y)+, we know that (12) is equivalent to

Fi(x) − (Fi(x) − Gi(x))+ = 0, i = 1, . . . , n.     (13)

Let ρ : R → R+ be a piecewise continuous density function satisfying

∫R ρ(s)ds = 1 and ∫R |s|ρ(s)ds < +∞.

Then, for any fixed μ > 0, the convolution

ϕ(t, μ) = ∫R (t − μs)+ρ(s)ds

defines a continuous function satisfying

|ϕ(t, μ) − (t)+| ≤ μ ∫R |s|ρ(s)ds.

From the definition of smoothing function, we know that ϕ(·, μ) is a smoothing function of (t)+.

Choose

Then

is a smoothing function of (t)+. Then, letting t = Fi(x) − Gi(x), i = 1, . . . , n, we have

We know that the smoothing function of Fi(x) − (Fi(x) − Gi(x))+ is

So, we can transform (13) into

Fi(x) − ϕ(Fi(x) − Gi(x), μ) = 0, i = 1, . . . , n.     (16)

Then, we can use the Method 2.2 to solve (16).
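
The following Python sketch shows how the smoothed system (16) and its merit function could be assembled; the mappings F and G and the plus-function smoothing ϕ(t, μ) = μ ln(1 + exp(t/μ)) are illustrative assumptions (the particular density ρ chosen above is not reproduced here), so this is only a sketch of the reformulation to which Method 2.2 can be applied.

```python
import numpy as np

def plus_smooth(t, mu):
    """One admissible smoothing of (t)_+: mu * ln(1 + exp(t/mu)), evaluated stably."""
    return mu * np.logaddexp(0.0, t / mu)

# Illustrative GCP data: smooth maps F, G from R^2 to R^2.
def F(x):
    return np.array([x[0] ** 2 + x[1] - 1.0, x[0] + 2.0 * x[1]])

def G(x):
    return np.array([x[0], x[1] - 0.5])

def H(x, mu):
    """Smoothed residual of (16): H_i(x, mu) = F_i(x) - phi(F_i(x) - G_i(x), mu)."""
    Fx, Gx = F(x), G(x)
    return Fx - plus_smooth(Fx - Gx, mu)

def merit(x, mu):
    """Merit function 0.5 * ||H(x, mu)||^2 to be minimized by Method 2.2."""
    r = H(x, mu)
    return 0.5 * float(r @ r)

if __name__ == "__main__":
    x = np.array([0.5, 0.5])
    print("H(x, 0.1) =", H(x, 0.1), " merit =", merit(x, 0.1))
```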

3.2. Application in solving linear maximum equations.

Here, we consider the equations of maximum functions studied in [16]. Let F : R → R be a finite maximum function,

F(t) = max{f1(t), . . . , fm(t)},

where each fi : R → R is affine,

fi(t) = pit + qi, i = 1, . . . , m,

where pi, qi ∈ R (i = 1, . . . , m, m ∈ N) are scalars. From the affine structure of F, we know that F is Lipschitz continuous and convex. We make the general assumption that

Moreover, there exist −∞ = t1 < t2 < . . . < tm < tm+1 = ∞ such that

F(t) = fi(t) for t ∈ [ti, ti+1], i = 1, . . . , m.

And

For the above affine equations of maximum functions, a smoothing function can be defined as follows. Let ρ : R → R be a piecewise continuous density function such that

and

We define the distribution function associated with ρ, i.e.,

Similar to [2], we can find the smoothing function of this special class of equations of maximum functions by convolution:

F̃(t, μ) = ∫R F(t − μs)ρ(s)ds.

For this affine finite system of equations of maximum functions, F(t) = 0, using the above convolution we can transform it into the smoothed equation F̃(t, μ) = 0, and we can then use Method 2.2 to solve it.
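
As an illustration of this convolution (with a density chosen here only for the example, not necessarily the one used in [16]), take ρ(s) = 1 for s ∈ [−1/2, 1/2] and ρ(s) = 0 otherwise. For the plus function (t)+, the convolution ∫R (t − μs)+ρ(s)ds equals 0 for t ≤ −μ/2, (t + μ/2)²/(2μ) for |t| < μ/2, and t for t ≥ μ/2, which is continuously differentiable and converges to (t)+ as μ ↓ 0. Since max{f1(t), f2(t)} = f1(t) + (f2(t) − f1(t))+ and this density is symmetric, the same computation also smooths a two-piece affine maximum.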

 

4. Application in related minimax optimization problem

In this section, we consider the minimax optimization problem (see [15])

minx∈Rn f(x),

where f(x) = maxi=1,...,m fi(x), and f1(x), . . . , fm(x) : Rn → R are twice continuously differentiable functions. Minimax problems are widely used in engineering design, optimal control, circuit design and computer-aided design. Usually, minimax problems can be approached by reformulating them into smooth problems with constraints or by dealing with the nonsmooth objective directly.

In this paper, we also use the smoothing function (see for instance [15])

f̃(x, μ) = μ ln( exp(f1(x)/μ) + · · · + exp(fm(x)/μ) )

to approximate the function f(x). In the following, we can see from the numerical results that using Method 2.2 to solve the minimax optimization problem works quite well. We use the examples in [4]. All codes are written in MATLAB 8.0. Throughout our computational experiments, the parameters used in Method 2.2 are chosen as

In our implementation, we use ∥Δx∥ ≤ 10^(-5) as the stopping rule. Here x0 is the initial point, x* is the optimal point, f(x*) is the optimal value, and k is the number of iterations.
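
As an illustration of this approach (a sketch with assumed problem data, not a reproduction of the MATLAB experiments reported below), the following Python code evaluates the log-sum-exp smoothing and its gradient for a small minimax objective; this f̃(·, μ) and its gradient can then be passed to a smoothing gradient iteration such as the sketch of Method 2.2 in Section 2.

```python
import numpy as np
from scipy.special import logsumexp

# Illustrative pieces f_1, ..., f_m of a minimax objective f(x) = max_i f_i(x).
def pieces(x):
    return np.array([x[0] ** 2 + x[1] ** 2,
                     (x[0] - 1.0) ** 2 + x[1],
                     x[0] + (x[1] - 1.0) ** 2])

def pieces_jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [2.0 * (x[0] - 1.0), 1.0],
                     [1.0, 2.0 * (x[1] - 1.0)]])

def f_tilde(x, mu):
    """Exponential smoothing of the max: mu * ln(sum_i exp(f_i(x)/mu))."""
    return mu * logsumexp(pieces(x) / mu)

def grad_f_tilde(x, mu):
    fi = pieces(x)
    w = np.exp((fi - fi.max()) / mu)
    w /= w.sum()                      # softmax weights, summing to 1
    return pieces_jac(x).T @ w

if __name__ == "__main__":
    x = np.array([0.3, -0.7])
    for mu in (1.0, 0.1, 0.01):
        # f_tilde approaches max_i f_i(x) from above as mu decreases.
        print(mu, f_tilde(x, mu), float(np.max(pieces(x))), grad_f_tilde(x, mu))
```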

Example 4.1 ([4]).

where

Table 4.1. Numerical results for Example 4.1.

Example 4.2 ([4]).

where

Consider the following nonlinear programming problem as in [4].

Table 4.2. Numerical results for Example 4.2.

Bandler and Charalambous (see [1]) proved that, for sufficiently large αi, the optimum of the nonlinear programming problem coincides with the minimum of the following minimax function:

where

Example 4.3 (Rosen-Suzuki Problem).

Here, we use

The numerical results for Example 4.3 are listed in Table 4.3.

Table 4.3. Numerical results for Example 4.3.

References

  1. J.W. Bandler and C. Charalambous, Nonlinear programming using minimax techniques, J. Optim. Theory Appl. 13 (1974), 607-619. https://doi.org/10.1007/BF00933620
  2. J.V. Burke, T. Hoheisel and C. Kanzow, Gradient consistency for integral-convolution smoothing functions, Set-Valued Var. Anal. 21 (2013), 359-376. https://doi.org/10.1007/s11228-013-0235-6
  3. X. Chen, Smoothing methods for nonsmooth, nonconvex minimization, Math. Program. 134 (2012), 71-99. https://doi.org/10.1007/s10107-012-0569-0
  4. C. Charalambous and A.R. Conn, An efficient method to solve the minimax problem directly, SIAM J. Numer. Anal. 15 (1978), 162-187. https://doi.org/10.1137/0715011
  5. X. Chen and L. Qi, A parameterized Newton method and a quasi-Newton method for nonsmooth equations, Comput. Optim. Appl. 3 (1994), 157-179. https://doi.org/10.1007/BF01300972
  6. X. Chen and W. Zhou, Smoothing nonlinear conjugate gradient method for image restoration using nonsmooth nonconvex minimization, SIAM J. Imaging Sci. 3 (2010), 765-790. https://doi.org/10.1137/080740167
  7. V.F. Demyanov and V.N. Malozemov, Introduction to Minimax, Wiley, New York, 1974.
  8. D.Z. Du and P.M. Pardalos (eds.), Minimax and Applications, Kluwer Academic Publishers, Dordrecht, 1995.
  9. Y. Gao, Nonsmooth Optimization, Science Press, Beijing, 2008.
  10. Y. Gao, Newton methods for solving two classes of nonsmooth equations, Appl. Math. 46 (2001), 215-229. https://doi.org/10.1023/A:1013791923957
  11. C. Ma, Optimization Method and its Matlab Program Design, Science Press, Beijing, 2010.
  12. E. Polak, On the mathematical foundations of nondifferentiable optimization, SIAM Review 29 (1987), 21-89. https://doi.org/10.1137/1029002
  13. E. Polak, J.E. Higgins and D.Q. Mayne, A barrier function method for minimax problems, Math. Program. 64 (1994), 277-294. https://doi.org/10.1007/BF01582577
  14. J.M. Peng and Z. Lin, A non-interior continuation method for generalized linear complementarity problems, Math. Program. 86 (1999), 533-563. https://doi.org/10.1007/s101070050104
  15. S. Xu, Smoothing method for minimax problems, Comput. Optim. Appl. 20 (2001), 267-279. https://doi.org/10.1023/A:1011211101714
  16. D. Zhu, Affine scaling interior Levenberg-Marquardt method for bound-constrained semismooth equations under local error bound conditions, J. Comput. Appl. Math. 219 (2008), 198-225. https://doi.org/10.1016/j.cam.2007.07.039