
Dynamic Embedded Optimization Applied to Power System Stabilizers

  • Sung, Byung Chul (School of Electrical and Electronic Engineering, Yonsei University) ;
  • Baek, Seung-Mook (Division of Electrical, Electronic and Control Engineering, Kongju National University) ;
  • Park, Jung-Wook (School of Electrical and Electronic Engineering, Yonsei University)
  • Received : 2013.08.19
  • Accepted : 2013.10.01
  • Published : 2014.03.01

Abstract

The systematic optimal tuning of power system stabilizers (PSSs) using the dynamic embedded optimization (DEO) technique is described in this paper. A hybrid system model with the differential-algebraic-impulsive-switched (DAIS) structure is used as a tool for the DEO of PSSs. Two numerical optimization methods, the steepest descent and Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithms, are investigated to implement the DEO using the hybrid system model. In addition to the gain and time constant of the phase-lead compensator, the output limits of the PSSs, which introduce non-smooth nonlinearities, are considered as parameters to be optimized by the DEO. The simulation results on the IEEE 39 bus New England system show the effectiveness and robustness of the PSSs tuned by the proposed DEO technique in improving system damping.


1. Introduction

Design processes are inherently optimization problems, involving trade-offs between competing objectives whilst ensuring constraints are satisfied. Such problems are not always established formally; nevertheless, underlying optimization principles apply. Design questions arising from system dynamic behavior can also be thought of in an optimization framework. However, the optimization formulation in this case must capture the processes driving the dynamics. This class of problems has come to be known as dynamic embedded optimization (DEO) [1]. For a typical disturbance, the power system stabilizer (PSS), which is used to improve the damping of low-frequency oscillations, provides an important control objective to which the DEO technique can be applied.

The dynamic behavior of the PSS is affected by the linear parameters (gain and time constant of the phase compensator), which are smooth, and by the constrained parameters (output limits), which introduce non-smooth nonlinearities. The appropriate selection of the linear parameters has usually been made with conventional tuning techniques [2-5] based on small-signal stability analysis. However, by focusing only on small-signal conditions, the dynamic damping performance immediately following a large disturbance is often degraded. The PSS output limits (which cannot be determined by the linear approach) can provide the solution to balance these competing effects [6]. In particular, these limit values attempt to prevent the machine terminal voltage from falling below the exciter reference level while the speed is also falling, which improves the transient recovery after a disturbance (faster recovery to the initial steady-state operating point, which saves system energy), especially in multi-machine power systems [5].

Power systems frequently exhibit interactions between continuous-time dynamics, discrete-time and discrete-event dynamics, switching actions, and jump phenomena. Such systems are known generically as hybrid systems, which can be modeled by a differential-algebraic-impulsive-switched (DAIS) structure [6, 7]. In particular, this hybrid system model provides an effective and insightful analysis of the PSS with non-smooth nonlinear dynamics due to saturation limits. This paper makes a new contribution by jointly determining the systematic optimal parameters [gain (KPSS), time constant (T1) of the phase-lead compensator, and output limits (Vmax and Vmin)] of the PSS shown in Fig. 1 using the hybrid system model based on the DAIS structure. The performance of the PSS tuned by the proposed DEO is assessed through its application to the IEEE 39 bus New England multi-machine power system in Fig. 2. In this paper, minimization of the objective function used in the DEO is carried out by two numerical optimization methods: the steepest descent method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [8, 9], which is the most popular quasi-Newton algorithm. The efficiency of these two algorithms used in the DEO for the tuning of PSSs on the multi-machine power system (in Fig. 2) is investigated. Also, the starting point of an iterative optimization problem can have a significant effect on the efficiency and robustness of the algorithm even when the same method is used. The conventional tuning method based on eigenvalue analysis may provide a good initial guess as the starting point of the steepest descent and BFGS algorithms. Therefore, its usefulness is considered an important factor for the DEO in this paper.

Fig. 1. PSS/AVR block representation.

Fig. 2. IEEE 39 bus New England power system.

This paper is organized as follows: Section II presents a summary of the hybrid system model with the DAIS structure. The steepest descent and BFGS algorithms used as solvers in the DEO are described in Section III. The detailed explanation of how to implement the DEO using the hybrid system model is given in Section IV. The dynamic damping performances of the PSSs tuned by the proposed DEO technique are evaluated in the case studies in Section V, which also include comparisons of the convergence speed of the steepest descent and BFGS algorithms. Finally, the conclusions are given in Section VI.

 

2. Hybrid System Presentation

2.1 Modeling

As mentioned in Section I, hybrid systems, which include power systems, are characterized by the following:

• Continuous and discrete states.
• Continuous dynamics.
• Discrete events or triggers.
• Mappings that define the evolution of discrete states at events.

In other words, the hybrid system is a mathematical model of a physical process consisting of an interacting continuous and discrete event system [10]. This means that there are both continuous and discrete states in such systems that influence each other's behavior. It is shown in [7] that such behavior can be captured by the following DAIS structure.

$\dot{\underline{x}} = \underline{f}(\underline{x}, y)$ (1)

$0 = g(\underline{x}, y)$ (2)

$g(\underline{x}, y) \equiv g^{(0)}(\underline{x}, y)$ together with $g^{(i-)}(\underline{x}, y)$ if $y_{d,i} < 0$, or $g^{(i+)}(\underline{x}, y)$ if $y_{d,i} > 0$, $i = 1, \ldots, d$ (3)

$z^{+} = h_{j}(\underline{x}^{-}, y^{-})$ when $y_{e,j}(\underline{x}^{-}, y^{-}) = 0$, $j \in \{1, \ldots, e\}$ (4)

where

$\underline{x} = [x^{T} \; z^{T} \; \lambda^{T}]^{T}$, $\qquad \underline{f}(\underline{x}, y) = [f(\underline{x}, y)^{T} \; 0 \; 0]^{T}$

and

• x are the continuous dynamic states, for example generator angles, speed, and fluxes.
• z are discrete dynamic states, such as transformer tap positions and protection relay logic states.
• y are algebraic states, e.g. load bus voltage magnitudes and angles.
• λ are parameters such as generator reactance, controller gains, switching times, and limit values.

The differential equations in (1) are correspondingly structured with $\underline{f} = [f^{T} \; 0 \; 0]^{T}$, so that z and λ remain constant away from events. Similarly, the reset equations in (4) ensure that x and λ remain constant at reset events, but the discrete dynamic states z are reset to new values according to $z^{+} = h_{j}(\underline{x}^{-}, y^{-})$ (the notation $z^{+}$ denotes the value of z just after the reset event, while $\underline{x}^{-}$ and $y^{-}$ refer to the values of $\underline{x}$ and y just prior to the event). The algebraic function g in (2) is composed of $g^{(0)}$ together with appropriate choices of $g^{(i-)}$ or $g^{(i+)}$, depending on the signs of the corresponding elements of $y_{d}$ in (3). An event is triggered by an element of $y_{d}$ changing sign and/or an element of $y_{e}$ in (4) passing through zero. In other words, at an event, the composition of g changes and/or elements of z are reset.

The system flows $\varphi$ are defined accordingly as

$\underline{x}(t) = \varphi_{x}(\underline{x}_{0}, t), \qquad y(t) = \varphi_{y}(\underline{x}_{0}, t)$ (5)

where $\underline{x}_{0}$ is the initial value of $\underline{x}$.

The full detailed explanation and associated mathematical equations of the DAIS model (especially for the switching and impulse effects) are given in [7] with the comprehensive studies of the hybrid system.
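As a concrete illustration of how a non-smooth PSS output limit fits the switched algebraic equations above, the short sketch below models the limiter as a pair of constraints $g^{(i-)}$/$g^{(i+)}$ selected by the sign of a trigger variable $y_d$. It is a minimal sketch for illustration only; the variable names and the limit value are assumptions, not the authors' implementation.

```python
V_MAX = 0.1  # assumed PSS output upper limit (pu), illustration only

def g_unlimited(v_pss, v_lead):
    # g^(i-): limiter inactive, the PSS output tracks the lead-lag output
    return v_pss - v_lead

def g_limited(v_pss, v_lead):
    # g^(i+): limiter active, the PSS output is clamped at V_MAX
    return v_pss - V_MAX

def y_d(v_lead):
    # Trigger variable: its sign change marks the event that swaps the
    # active algebraic constraint (the composition of g changes).
    return v_lead - V_MAX

def pss_output(v_lead):
    # Select the active constraint from the sign of y_d and solve
    # 0 = g(v_pss, v_lead) for v_pss (both constraints are linear here).
    if y_d(v_lead) > 0:
        v_pss, active = V_MAX, g_limited
    else:
        v_pss, active = v_lead, g_unlimited
    assert abs(active(v_pss, v_lead)) < 1e-12  # the chosen constraint is satisfied
    return v_pss

print(pss_output(0.05), pss_output(0.15))  # -> 0.05 0.1
```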

2.2 Trajectory sensitivities

The flows $\varphi$ in (5) of a system will generally vary with changes in parameters and/or initial conditions. Trajectory sensitivity provides a way of quantifying the changes in the flow that result from (small) changes to parameters and/or initial conditions. The development of these sensitivity concepts is based on the DAIS model in (1)~(4). Trajectory sensitivities follow from a Taylor series expansion (neglecting higher order terms) of the flows $\varphi_{x}$ and $\varphi_{y}$ in (5), which can be expressed as

$\Delta \underline{x}(t) \approx \dfrac{\partial \varphi_{x}}{\partial \underline{x}_{0}}(\underline{x}_{0}, t)\, \Delta \underline{x}_{0} = \Gamma_{x0}(t)\, \Delta \underline{x}_{0}$ (6)

$\Delta y(t) \approx \dfrac{\partial \varphi_{y}}{\partial \underline{x}_{0}}(\underline{x}_{0}, t)\, \Delta \underline{x}_{0} = \Gamma_{y}(t)\, \Delta \underline{x}_{0}$ (7)

where $\Gamma_{x0} \in \Re^{\underline{n} \times \underline{n}}$ and $\Gamma_{y} \in \Re^{m \times \underline{n}}$ are the partial derivatives of the system flows, known as the trajectory sensitivities ($\underline{n}$ and m are the dimensions of $\underline{x}$ and y, respectively). Recall that $\underline{x}$ incorporates the parameters λ; therefore, the sensitivities to initial conditions include the parameter sensitivities.

The calculations in (6) and (7) can require expensive computational effort when the equations have high dimension, as in large systems. Fortunately, by using an implicit numerical integration technique such as trapezoidal integration, the computational burden for obtaining the trajectory sensitivities can be reduced considerably [6, 7].
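To make the meaning of the sensitivities concrete, the sketch below approximates the parameter sensitivities by simple forward differences around a nominal trajectory. This is only an illustration of what the sensitivities represent; the paper obtains them far more efficiently as a by-product of the implicit trapezoidal integration. The `simulate` callable is a hypothetical stand-in for the hybrid system simulation.

```python
import numpy as np

def trajectory_sensitivity(simulate, lam, t, eps=1e-6):
    """Forward-difference approximation of d x(t) / d lambda.

    simulate(lam, t) is assumed to return the flow as an array of shape
    (len(t), n_states); the returned sensitivity has shape
    (len(t), n_states, len(lam)).
    """
    base = simulate(lam, t)                      # nominal trajectory
    sens = np.zeros(base.shape + (len(lam),))
    for j in range(len(lam)):
        pert = np.array(lam, dtype=float)
        pert[j] += eps                           # perturb one parameter at a time
        sens[:, :, j] = (simulate(pert, t) - base) / eps
    return sens
```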

 

3. Numerical Optimization Methods

In multivariable nonlinear engineering problems, numerical optimization methods play a significant part in finding the solutions of nonlinear functions on complex systems, or in selecting the parameters by which an objective function J can be minimized or maximized. The optimal tuning problem for the PSSs described in this paper is a case of the latter. Two representative first-order gradient-based methods [8, 9], the steepest descent and BFGS quasi-Newton algorithms, are investigated. It is important to note that this first-order gradient information is already available by applying the trajectory sensitivities based on the DAIS structure described in Section II.

3.1 Steepest descent algorithm

The steepest descent method is simple to implement; however, it is often slow to converge. The gradient of an objective function J at a point is the direction of the most rapid increase in the value of the function at that point. The descent direction is the negative of the gradient direction. The series of steps to be taken is given below.

Algorithm-1: Steepest descent

Given starting point λ0, N (number of iterations), ε1, ε2 (positive stopping criteria), k ← 0;

while (k < N) and (ftol_1 > ε1 or ftol_2 > ε2)

Compute search direction pk = −∇J(λk). Set λk+1 = λk + αkpk, where αk is the step length. k ← k + 1;

end (while)
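A minimal Python sketch of Algorithm-1 is given below for illustration. `J` and `grad_J` are hypothetical callables returning the objective value and its gradient (obtained from the trajectory sensitivities in this paper); the fixed step length and the stopping tests mirror the roles of αk, ftol_1, and ftol_2 described in Section IV, in an assumed simplified form.

```python
import numpy as np

def steepest_descent(J, grad_J, lam0, alpha=0.01, N=20, eps1=1e-4, eps2=1e-4):
    """Sketch of Algorithm-1 with a fixed step length alpha_k = alpha."""
    lam = np.asarray(lam0, dtype=float)
    for k in range(N):
        g = grad_J(lam)
        p = -g                                   # search direction: negative gradient
        lam_new = lam + alpha * p                # fixed-step update
        # simplified stopping tests: max relative gradient / max relative change
        ftol1 = np.max(np.abs(g * lam_new)) / max(abs(J(lam_new)), 1e-12)
        ftol2 = np.max(np.abs(lam_new - lam) / np.maximum(np.abs(lam_new), 1e-12))
        lam = lam_new
        if ftol1 <= eps1 or ftol2 <= eps2:
            break
    return lam
```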

The more detailed explanation of the step length αk and functions ftol_1 and ftol_2 for the stopping criteria is given in the next Section IV.

3.2 BFGS quasi-Newton algorithm

Like the steepest descent method, the quasi-Newton methods [8, 9] require only the first-order gradient of the objective function J to be supplied at each iterate. By measuring the changes in gradients, they provide a dramatic improvement in convergence rate and robustness over the steepest descent method, especially on difficult problems. Moreover, because the second derivatives are not required, the quasi-Newton methods are sometimes more efficient than Newton's method. A further advantage of the quasi-Newton methods is that they provide an estimate of the Hessian (recall that building the true Hessian is not feasible in practice, since it involves the second-order trajectory sensitivities, which are computationally expensive).

This estimated Hessian may provide an indication of coupling between the design parameters λ, and hence allow physical insights that assist in the design process [1]. The most popular quasi-Newton algorithm is the BFGS method, named for its discoverers Broyden, Fletcher, Goldfarb, and Shanno. The BFGS algorithm is given below.

Algorithm-2: BFGS method

Given starting point λ0, N (number of iterations), ε1, ε2 (stopping criteria), inverse Hessian approximation H0 (positive definite matrix), k ← 0;

while (k < N) and (ftol_1 > ε1 or ftol_2 > ε2)

Compute search direction pk = −Hk∇J(λk). Set λk+1 = λk + βkpk, where βk is the step length. Define sk = λk+1 − λk and qk = ∇J(λk+1) − ∇J(λk). Compute Hk+1 = (I − ρk sk qk^T) Hk (I − ρk qk sk^T) + ρk sk sk^T, where I is an identity matrix and ρk = 1/(qk^T sk). k ← k + 1;

end (while)
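The sketch below illustrates Algorithm-2 in Python, using the standard BFGS inverse-Hessian update, a unit step length (βk = 1), and the initial approximation H0 = μ·I discussed in Section IV. `grad_J` is a hypothetical callable; in this paper the gradient comes from the trajectory sensitivities of the DAIS model. Stopping tests are omitted here for brevity.

```python
import numpy as np

def bfgs(grad_J, lam0, mu=100.0, beta=1.0, N=20):
    """Sketch of Algorithm-2 (BFGS) with H0 = mu*I and unit step length."""
    lam = np.asarray(lam0, dtype=float)
    n = lam.size
    H = mu * np.eye(n)                      # initial inverse Hessian approximation
    g = grad_J(lam)
    for k in range(N):
        p = -H @ g                          # search direction p_k = -H_k * grad J
        lam_new = lam + beta * p            # beta_k = 1 (unit step length)
        g_new = grad_J(lam_new)
        s = lam_new - lam                   # s_k: change in parameters
        q = g_new - g                       # q_k: change in gradients
        rho = 1.0 / (q @ s)
        I = np.eye(n)
        # BFGS update of the inverse Hessian approximation
        H = (I - rho * np.outer(s, q)) @ H @ (I - rho * np.outer(q, s)) + rho * np.outer(s, s)
        lam, g = lam_new, g_new
    return lam
```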

The descriptions of how to define the initial approximation H0 and step length βk are also presented in the next Section.

 

4. Implementation of the DEO

4.1 Test system

In this study, the ten-machine, 39-bus New England multi-machine power system in Fig. 2 is considered. The data for this system are given in [11]. Each machine is represented by a fourth-order nonlinear model [12]. It is assumed that all generators (G1~G10) are equipped with the PSS/automatic voltage regulator (AVR) system shown in Fig. 1. The targeted parameters to be optimized are KPSS(i), T1(i), Vmax(i), and Vmin(i), i = G1, G2, …, G10 (for all PSSs); therefore, the total number of optimized parameters is 40.

4.2 Objective function and minimization

Many practical optimization problems can be formulated using a Bolza form of the objective function J:

$J(\lambda) = \Phi\big(\underline{x}(t_f), t_f\big) + \int_{t_0}^{t_f} L\big(\underline{x}(t), y(t), \lambda\big)\, dt$ (10)

where λ are the optimized parameters (40 in total in this study) that are adjusted to minimize the value of the objective function J in (10), and tf is the final time. The objective of the PSS tuning is to improve system damping and force the system to recover to the post-disturbance stable operating point as quickly as possible. The speed deviation (Δω) and terminal voltage deviation (ΔVt) of each generator are considered good measures of the damping and recovery. Therefore, the objective function in (10) can be re-formulated for the optimal tuning of PSSs over a specific time tf as follows:

$J(\lambda) = \int_{0}^{t_f} \big[\Delta\omega(\lambda, t)^{T} \;\; \Delta V_{t}(\lambda, t)^{T}\big]\, W\, \big[\Delta\omega(\lambda, t)^{T} \;\; \Delta V_{t}(\lambda, t)^{T}\big]^{T}\, dt$ (11)

where $\Delta\omega_{i}(\lambda, t) = \omega_{i}(\lambda, t) - \omega_{s,i}$, $\Delta V_{t,i}(\lambda, t) = V_{t,i}(\lambda, t) - V_{t,i}^{s}$, and W is the diagonal weighting matrix. The $\omega_{s,i}$ and $V_{t,i}^{s}$ are the post-fault steady state values of ωi and Vt,i, respectively. Note that the dependence of the system responses ωi(λ,t) and Vt,i(λ,t) on the parameters λ is provided by the flows in (5). Also, the diagonal matrix with weighting factors W is determined by considering the balance of the conflicting requirements on the speed and voltage deviations.
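For illustration, the sketch below evaluates an objective of the form (11) from simulated generator responses using the trapezoid rule. The array shapes, names, and weights are hypothetical placeholders; in the paper the responses come from the hybrid system flows.

```python
import numpy as np

def objective_J(t, omega, Vt, omega_s, Vt_s, w_omega, w_vt):
    """Evaluate a (11)-style cost from trajectories.

    t:       time grid, shape (nt,)
    omega:   generator speeds, shape (nt, 10);   omega_s: post-fault values, (10,)
    Vt:      terminal voltages, shape (nt, 10);  Vt_s:    post-fault values, (10,)
    w_omega, w_vt: diagonal weighting factors of W, each shape (10,)
    """
    d_omega = omega - omega_s                    # speed deviations
    d_vt = Vt - Vt_s                             # terminal voltage deviations
    integrand = (d_omega**2) @ w_omega + (d_vt**2) @ w_vt
    return np.trapz(integrand, t)                # integrate over the study window
```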

4.3 Computation of gradient

Minimization of the value of J in (11) is straightforward even though the cost is obtained by integrating over the system flows (trajectories). The simplest way of obtaining J is to introduce a new state variable c, with $\dot{c}$ equal to the integrand of (11), so that J = c(tf). Thereafter, the trajectory sensitivities with respect to λ directly provide the gradient

$\nabla_{\lambda} J = \dfrac{\partial c(t_f)}{\partial \lambda}$ (12)

Again, note that through the appropriate implementation of trajectory sensitivities described in Section II-B, the extra computational requirement in determining (12) is negligible.
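A minimal sketch of this augmented-state route is shown below. It assumes the hybrid simulation has been extended with the cost state c of (12) and that its trajectory sensitivities with respect to the 40 tuned parameters are available as an array (a hypothetical `sens_c` of shape (nt, n_lam), as produced by the computation of Section II-B).

```python
import numpy as np

def gradient_from_cost_state(sens_c):
    """Gradient of J via the augmented cost state c (J = c(t_f)).

    sens_c: trajectory sensitivities dc/dlambda along the trajectory,
            shape (nt, n_lam). Since J = c(t_f), grad J is simply the
            sensitivity row evaluated at the final time.
    """
    return np.asarray(sens_c)[-1, :]
```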

4.4 Important factors of numerical methods

4.4.1 Stopping criteria

With the efficient computation of ∇J by (12), the two numerical optimization methods (steepest descent and BFGS algorithms) presented in Section III are ready to be applied. It is now necessary to present the functions ftol_1 and ftol_2 (used with the stopping criteria ε1 and ε2), which evaluate the maximum relative gradient of J at λk in (13) and the maximum relative change in successive parameter values at λk in (14), respectively. The detailed descriptions of these stopping criteria are given in [9].
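One possible implementation of these two tests is sketched below, following the relative-gradient and relative-change criteria of [9] in a simplified, assumed form; the exact scaling used in the paper is not reproduced here.

```python
import numpy as np

def ftol_1(grad, lam, J_val, tiny=1e-12):
    """Maximum relative gradient of J at lam (simplified form)."""
    return np.max(np.abs(grad * lam)) / max(abs(J_val), tiny)

def ftol_2(lam_new, lam_old, tiny=1e-12):
    """Maximum relative change between successive parameter vectors."""
    return np.max(np.abs(lam_new - lam_old) / np.maximum(np.abs(lam_new), tiny))

# The iteration terminates once ftol_1 <= eps1 or ftol_2 <= eps2,
# or after the allowed number of iterations N.
```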

4.4.2 Step length

In this study, fixed step lengths αk and βk are used in the steepest descent and BFGS algorithms, respectively. More efficient line search algorithms [8, 9] can be used to determine the optimal step length, which will be reported in the authors' other publication.

The order of magnitude (O) of the gradient at a starting point is observed to differ depending on the characteristics of the parameters: the gradient with respect to the smooth parameters [KPSS, T1] is in the range of O(10−3), whilst the gradient with respect to the non-smooth parameters [Vmax, Vmin] is in the range of O(10−1~10−2). Therefore, for the steepest descent algorithm, the step length αk is applied with different scaling factors of 100 and 0.01 to the corresponding gradients of [KPSS, T1] and [Vmax, Vmin], respectively. Otherwise, the convergence of the steepest descent is too slow with a small single step length, or it can diverge with a large single step length. On the other hand, for the BFGS algorithm, the unit step length βk = 1 is used because the Hessian approximation H provides an effective self-correcting property even for gradients with different orders of magnitude.
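The sketch below shows one way to apply the two scaling factors to the corresponding parameter blocks within a steepest-descent step. The ordering of the parameter vector (smooth [KPSS, T1] entries first, output limits [Vmax, Vmin] last) is an assumption made for the example.

```python
import numpy as np

def scaled_descent_step(lam, grad, n_smooth, scale_smooth=100.0, scale_limits=0.01):
    """One steepest-descent update with block-wise step scaling.

    lam, grad: current parameters and gradient, shape (40,)
    n_smooth:  number of leading entries belonging to the [K_PSS, T1] block
    """
    step = np.empty_like(grad)
    step[:n_smooth] = scale_smooth * grad[:n_smooth]   # smooth [K_PSS, T1] block
    step[n_smooth:] = scale_limits * grad[n_smooth:]   # non-smooth [V_max, V_min] block
    return lam - step                                  # descent update
```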

4.4.3 Initial inverse Hessian approximation in BFGS method

The initial inverse Hessian approximation H0 should be a positive definite matrix. The value of H0 affects the effectiveness and robustness of the overall algorithm. It is often set to some multiple μ⋅I of the identity matrix, but there is no good general strategy for choosing μ [8]. Its value is determined experimentally by evaluating the convergence speed and robustness of the algorithm (the convergence becomes slow if μ is too large and fails if μ is too small). μ = 100 is used in this study.

4.5 Starting point of numerical methods

A closed-form solution, i.e. a global minimum λ*, is out of the question. In other words, it is practically impossible to know if there is a global minimum of the objective function. Most numerical optimization algorithms can at best locate one local minimum that is "close enough" to λ*; usually this is good enough in practice. Therefore, the inability to determine the existence and uniqueness of the solution is not the primary concern in many practical applications [9]. The choice of starting point can have a significant effect on the efficiency and robustness of the overall algorithm. In this study, the conventional tuning method [2-5] based on eigenvalue analysis is used to determine the starting point (so-called SP-EIG) as a good initial guess. For the DAIS model of the hybrid system in (1) and (2), the eigenvalues can be computed from the reduced-order system matrix A in (15):

$A = f_{x} - f_{y}\, g_{y}^{-1}\, g_{x}$ (15)

where $f_{x}$, $f_{y}$, $g_{x}$, and $g_{y}$ are the Jacobians of f and g with respect to x and y, evaluated at the operating point.
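A brief sketch of the small-signal computation behind SP-EIG is shown below: it builds the reduced matrix A of (15) from the Jacobians (assumed here to be available as dense NumPy arrays) and returns the damping ratios of the oscillatory modes, from which the electromechanical mode of each machine can be inspected.

```python
import numpy as np

def reduced_system_matrix(fx, fy, gx, gy):
    """A = f_x - f_y * inv(g_y) * g_x, as in (15)."""
    return fx - fy @ np.linalg.solve(gy, gx)

def damping_ratios(A):
    """Damping ratio zeta = -Re(lambda)/|lambda| for each oscillatory eigenvalue."""
    eig = np.linalg.eigvals(A)
    osc = eig[np.abs(eig.imag) > 1e-6]       # keep complex (oscillatory) modes
    return -osc.real / np.abs(osc)
```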

For comparison with the SP-EIG, another starting point (so-called SP-INI), with the same parameters (KPSS = 2, T1 = 5, T2 = 0.05, and TW = 10) for all PSSs of G1~G10 (in Fig. 2), is used as an example of a worse starting point than the SP-EIG. Also, the output limits [Vmax, Vmin] of all PSSs are set to [0.1, −0.1] in both the SP-EIG and SP-INI. The parameters of the SP-EIG are shown in Table 1, which also presents the eigenvalues of the electromechanical mode (Δω) of G1~G10 for the SP-EIG and SP-INI.

It is observed from Table 1 that the PSSs of all machines have better damping ratios with the SP-EIG than with the SP-INI. For visual illustration, the system responses (Δω) of G1 and G10 are shown in Figs. 3 and 4 when a three-phase short-circuit fault is applied to bus 39 in Fig. 2 at t = 0.1 s.

Table 1. Parameters in SP-EIG and eigenvalues of the electromechanical mode in all machines by SP-EIG and SP-INI.

Fig. 3. Damping performance of PSSs in SP-EIG and SP-INI: speed deviation response Δω [rad/s] of G1.

Fig. 4. Damping performance of PSSs in SP-EIG and SP-INI: speed deviation response Δω [rad/s] of G10.

 

5. Simulation Results

5.1 Convergence speed

The convergence speed of the steepest descent and BFGS algorithms is compared in Figs. 5 and 6, which show the variation of the value of J during 20 iterations from the starting points SP-INI and SP-EIG, respectively.

Fig. 5. Value of objective function J variations from SP-INI.

Fig. 6. Value of objective function J variations from SP-EIG.

It is clearly shown by the results in Figs. 5 and 6 that the BFGS algorithm has a much faster convergence speed than the steepest descent algorithm. Not surprisingly, this confirms that the BFGS is remarkably more successful than the steepest descent for the DEO applied to the PSSs on a multi-machine power system, as is generally known in the field of numerical optimization.

5.2 Damping performance

After 20 iterations from the SP-EIG, the values of J by the steepest descent and BFGS algorithms are 0.0269 and 0.0176, respectively. At this point, the corresponding damping performance by applying the same three-phase short circuit fault at the bus 39 in Fig. 2 is compared in Figs. 7 and 8.

Fig. 7.Comparison of damping performance by steepest descent and BFGS algorithms: speed deviation response Δω [rad/s] of G1.

Fig. 8.Comparison of damping performance by steepest descent and BFGS algorithms: speed deviation response Δω [rad/s] of G10.

The results (speed deviation responses Δω of G1 and G10) in Figs. 7 and 8 show that the PSSs with the parameters optimized by the BFGS algorithm (given in Table 2) provide better damping performance for low-frequency oscillations than those tuned by the steepest descent algorithm.

Table 2. Optimized parameters by BFGS algorithm after 20 iterations from SP-EIG.

5.3 Robustness of algorithms

With the same step lengths (αk and βk) and initial inverse Hessian approximation H0 as used before, the robustness of the algorithms is tested with a three-phase short circuit applied at a different location, bus 16 in Fig. 2.

Starting from the SP-INI and SP-EIG, the variations of the value of J during 20 iterations are shown in Figs. 9 and 10, respectively. The BFGS algorithm still shows better convergence than the steepest descent algorithm, especially with the SP-EIG.

Fig. 9. Value of objective function J variations from SP-INI: test by a three-phase short circuit fault applied at bus 16.

After 20 iterations of the steepest descent and BFGS algorithms, as shown in Fig. 10, the values of J are 0.4940 and 0.4424, respectively. With the optimized parameters at this point, the damping performance with the same three-phase short circuit fault at bus 16 (in Fig. 2) is evaluated in Figs. 11 and 12, which show that the parameters optimized by the BFGS improve the overall system damping more effectively than those optimized by the steepest descent method.

Fig. 10. Value of objective function J variations from SP-EIG: test by a three-phase short circuit fault applied at bus 16.

Fig. 11. Comparison of damping performance by steepest descent and BFGS algorithms: test by a three-phase short circuit fault applied at bus 16 (speed deviation response Δω [rad/s] of G1).

Fig. 12. Comparison of damping performance by steepest descent and BFGS algorithms: test by a three-phase short circuit fault applied at bus 16 (speed deviation response Δω [rad/s] of G10).

 

6. Conclusions

In this paper, the systematic optimal tuning of power system stabilizers (PSSs) on a multi-machine power system by the dynamic embedded optimization (DEO) technique was described. In addition to the linear parameters (gain and time constant of the phase compensator), which are smooth, the output limits of the PSSs, which introduce non-smooth nonlinearities, were also considered as parameters to be optimized.

To implement the DEO technique applied to the tuning of the PSSs (including the output limits), the hybrid system model based on the differential-algebraic-impulsive-switched (DAIS) structure was used. From the trajectory sensitivities exploited in the hybrid system model with the DAIS structure, the gradients of the objective function J with respect to the parameters were computed.

Minimization of the value of J used in the DEO was solved by two numerical optimization algorithms, the steepest descent method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. In terms of convergence speed, the simulation results showed that the BFGS algorithm is superior to the steepest descent algorithm. Moreover, the parameters of the PSSs optimized by the BFGS improved the system damping more efficiently than those optimized by the steepest descent. As a good initial guess for the starting point of the steepest descent and BFGS algorithms, the conventional tuning method based on eigenvalue analysis was used to determine the initial parameters of the PSSs. This starting point was well matched with the two proposed numerical optimization algorithms.

References

  1. Ian A. Hiskens, Jung-Wook Park, and Vaibhav Donde, "Dynamic Embedded Optimization and Shooting Methods for Power System Performance Assessment", Chapter 9, pp. 179-199, in Applied Mathematics for Deregulated Electric Power Systems, edited by Joe H. Chow, Felix F. Wu, and James A. Momoh, Springer, 2005 (ISBN: 0-387-23470-5).
  2. P. Kundur, M. Klein, G. J. Rogers, and M. S. Zywno, "Application of Power System Stabilizers for Enhancement of Overall System Stability," IEEE Trans. on Power Systems, Vol. 4, No. 2, pp. 614-626, May 1989. https://doi.org/10.1109/59.193836
  3. M. Klein, G. J. Rogers, S. Moorty, and P. Kundur, "Analytical Investigation of Factors Influencing Power System Stabilizers Performance", IEEE Trans. on Energy Conversion, Vol. 7, No. 3, pp. 382-388, September 1992. https://doi.org/10.1109/60.148556
  4. N. Martins and L. T. G. Lima, "Determination of Suitable Locations for Power System Stabilizers and Static VAr Compensators for Damping Electro-mechanical Oscillations in Large Scale Power Systems", in Proc. of Power Industry Computer Application, pp. 74-82, May 1989.
  5. Prabha Kundur, Power System Stability and Control, EPRI Editors, McGraw-Hill, Inc., 1993, ISBN 0-07-035958-X.
  6. Seung-Mook Baek and Jung-Wook Park, "Hessian Matrix Estimation in Hybrid Systems Based on an Embedded FFNN", IEEE Transactions on Neural Networks, Vol. 21, No. 10, pp. 1533-1541, Oct. 2010. https://doi.org/10.1109/TNN.2010.2042728
  7. Ian A. Hiskens, "Trajectory Sensitivity Analysis of Hybrid Systems", IEEE Trans. on Circuits and Systems-Part I: Fundamental Theory and Applications, Vol.47, No.2, pp. 204-220, February 2000. https://doi.org/10.1109/81.828574
  8. J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.
  9. J. E. Dennis and Robert B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Philadelphia, 1996.
  10. M. S. Branicky, V. S. Borkar, and S. K. Mitter, "A unified framework for hybrid control: Model and optimal control theory", IEEE Trans. on Automat. Contr., Vol. 43, pp. 31-45, January 1998. https://doi.org/10.1109/9.654885
  11. M. A. Pai, Energy Function Analysis for Power System Stability. Norwell, MA: Kluwer, 1989.
  12. P. W. Sauer and M. A. Pai, Power System Dynamics and Stability. Englewood Cliffs, NJ: Prentice-Hall, 1998.