
Robot Manipulator Visual Servoing via Kalman Filter-Optimized Extreme Learning Machine and Fuzzy Logic

  • Zhou, Zhiyu (School of Information Science and Technology, Zhejiang Sci-Tech University) ;
  • Hu, Yanjun (School of Information Science and Technology, Zhejiang Sci-Tech University) ;
  • Ji, Jiangfei (School of Information Science and Technology, Zhejiang Sci-Tech University) ;
  • Wang, Yaming (Lishui University) ;
  • Zhu, Zefei (School of Mechanical Engineering, Hangzhou Dianzi University) ;
  • Yang, Donghe (School of Information Science and Technology, Zhejiang Sci-Tech University) ;
  • Chen, Ji (School of Information Science and Technology, Zhejiang Sci-Tech University)
  • Received : 2021.12.15
  • Accepted : 2022.07.25
  • Published : 2022.08.31

Abstract

Visual servoing (VS) based on the Kalman filter (KF) algorithm, as in the case of KF-based image-based visual servoing (IBVS) systems, suffers from three problems in uncalibrated environments: the perturbation noises of the robot system, error of noise statistics, and slow convergence. To solve these three problems, this paper presents an IBVS scheme based on the KF, an African vultures optimization algorithm enhanced extreme learning machine (AVOA-ELM), and fuzzy logic (FL). Firstly, the KF estimates the image Jacobian matrix online. We propose an AVOA-ELM error compensation model to compensate for the sub-optimal estimation of the KF, thereby addressing the problems of disturbance noises and noise statistics error. Next, an FL controller is designed for gain adaptation, which addresses the slow convergence of the KF-based IBVS system. We then combine the two components into a visual servoing scheme, FL-KF-AVOA-ELM. Finally, we verify the algorithm on the 6-DOF robotic manipulator PUMA 560. Compared with existing methods, our algorithm solves the three problems mentioned above without camera parameters, a robot kinematics model, or target depth information. We also compare the proposed method with other KF-based IBVS methods under different disturbance noise environments; the proposed method achieves the best results under all three evaluation metrics.


1. Introduction

With the advance of industrial automation, the demand for robots in production and daily life is increasing, which promotes the development of the robot industry. Industrial robots free the labor force from dangerous and repetitive work, while service robots are constantly changing the way people live. With the improvement of hardware performance, robot capabilities have improved dramatically and the application scenarios of robots have become richer. Industrial robots are the most important part of the traditional robot market: in assembly, handling, welding, and other repetitive labor, they are replacing human workers on a large scale. With the acceleration of population aging, research on service robots is increasingly valued by robot research and development companies. Robots play an important role in many industries, such as sorting and handling robots in the logistics industry and underwater robots for deep-sea exploration.

In the field of robot vision, research on robot arm visual servo systems has a wide range of application scenarios. Manipulator visual servoing studies the cooperative control of the end-effector and the vision device. The realization of a manipulator visual servo system mainly involves system modeling and model identification [1-3]. Estimating the image Jacobian matrix is a common visual servoing strategy. The image Jacobian matrix relates the robot's visual features to its pose changes: from the change in visual features, the change in the pose of the manipulator can be obtained, so that the manipulator can be controlled to move to the desired position in real time. Robot manipulator IBVS is a real-time control system that mainly includes target image acquisition, image processing, target feature extraction, image Jacobian matrix estimation, feedback control computation, and motion control. Considering the real-time requirements of the system, these processing steps must be completed within a specific time. To ensure real-time performance, reduce computing time, and optimize system response speed, the image Jacobian matrix estimation method should be simple.

To realize the conversion between pixel coordinates and actual coordinates in conventional robot visual servoing, calibration must be performed first. For visual servo control, this calibration includes not only camera calibration but also hand-eye calibration of the robot system [4]. Because visual servo calibration needs a lot of accurate prior information, the robustness of the system is poor; hence, uncalibrated visual servoing is used in [5]. Uncalibrated visual servoing is a control method that drives the system to converge within an allowable error directly from images, without prior knowledge of the camera model, the robot kinematic model, or the system error. Uncalibrated visual servoing avoids the tedious calibration process and has great advantages in control efficiency, ease of application, and performance. However, three problems remain in uncalibrated environments: the perturbation noises of the robot system, error of noise statistics, and slow convergence. Therefore, in this paper, we study the image Jacobian matrix problem in uncalibrated visual servoing. A novel IBVS strategy, which combines the KF with AVOA-ELM and FL techniques, is presented; it needs neither camera calibration nor depth information of the target. Unlike the classical IBVS scheme, the proposed IBVS method does not require camera parameters or robot inverse-kinematics parameters. In addition, the proposed online Jacobian estimation can be used without the depth information of the target. The main contributions of this article are as follows:

(1) The KF algorithm provides the optimal minimum-variance estimate when the initial state (or initial state error) and all system noises satisfy the Gaussian assumption. In practice, however, the robot system exhibits non-Gaussian disturbance errors, and the noise statistics are unknown. Therefore, a fast learning algorithm, AVOA-ELM, is trained to compensate the online Jacobian estimation.

(2) The convergence rate of the KF-based IBVS is slow. To address this problem, we use an experience-based FL unit to estimate the optimal control rate for the velocity controller in the loop. We take the norm of the image feature error, the norm of the derivative of the feature error, and the norm of the joint angles as the inputs of the FL controller. By applying the FL unit with an adaptive control rate, the time cost of the visual servo control is reduced, and the convergence of the image feature error is accelerated.

The rest of this article is organized as follows: Section 2 reviews research on visual servoing. Section 3 introduces the model for estimating the image Jacobian matrix with the KF algorithm. Section 4 proposes a method for estimating the image Jacobian matrix online using the KF-AVOA-ELM state estimator. Section 5 introduces the IBVS system based on the proposed visual servoing scheme combining FL and KF-AVOA-ELM (FL-KF-AVOA-ELM). The simulation results are given in Section 6, and the conclusion is given in Section 7.

2. Related work

According to the type of visual feedback information [6-10], visual servoing is divided into position-based visual servoing (PBVS), image-based visual servoing (IBVS) [11], and hybrid-based visual servoing (HBVS). Another classification is based on the location of the camera: according to the camera's position relative to the manipulator [12], visual servoing is classified into the eye-to-hand (ETH) model and the eye-in-hand (EIH) model.

2.1 Position-based visual servoing

PBVS is applied to many robot models with global cameras. Throughout the control process, the 3D pose is estimated from camera images; note that the pose estimation problem can be converted into a state estimation problem. In [13], a real-time PBVS control algorithm was proposed for an EIH model. The initial state of the extended Kalman filter (EKF) corresponded to the initial pose of the end effector, which was obtained by photogrammetry. An optical flow algorithm was used to track the target, which not only reduced the joint motion delay but also improved the real-time attitude and velocity estimation of the non-cooperative target. The drawbacks of this algorithm were the EKF's need for the statistical characteristics of the noise and a sufficiently high sampling frequency. For uncalibrated hand-eye systems, PBVS is sensitive to the depth information of the target and the accuracy of camera calibration. These defects can cause image features to leave the field of view (FOV) of the camera and target tracking to fail. He et al. [14] proposed a deep learning-based visual servoing method in which a new training strategy enables the baseline network to obtain better pose predictions. However, their method does not perform well in practical scenarios because the model tends to overfit.

2.2 Image-based visual servoing

IBVS uses image information as the control signal for the objective function without 3D reconstruction. The IBVS control signal does not depend on the camera and robot system parameters, which makes it more robust to camera errors and more suitable for uncalibrated visual servoing. Nevertheless, a typical IBVS control algorithm still depends on the robot system and camera parameters, and calibration accuracy is very important for such a system [15]. However, calibration is complex and costly, so an IBVS control algorithm that requires no parameter calibration is needed. In this regard, a stable adaptive visual servo scheme was proposed in [16]. This method combines motion control and visual servoing based on the SDU decomposition method to overcome inadequate calibration of the camera-robot system parameters. However, the dynamics-based method needs a high sampling frequency and must compute forces and torques in real time, which results in high computational complexity. Meanwhile, Wang et al. [17] put forward an IBVS system with dual cameras. Here, the web camera was based on the EIH model and used for target tracking, while the object depth information was acquired by binocular vision in real time. Further, an adaptive visual servoing gain based on image error feedback was used to improve the speed of the controller. However, the system must process two images per cycle, which results in poor real-time performance.

To realize a satisfactory IBVS system, a mapping model needs to be established between the image feature space and the robotic arm space. At present, the image Jacobian matrix model is widely used for this purpose. For uncalibrated hand-eye control tasks, the image Jacobian matrix varies with the image features, and it is difficult to obtain the actual Jacobian matrix; hence, a valid method for online estimation of the matrix becomes necessary. Analytic and numerical methods are the main approaches for image Jacobian matrix estimation. The analytic method depends on the camera and robot models, is very sensitive to camera error and manipulator model error, and has high computational complexity. The numerical method, on the other hand, estimates the image Jacobian matrix as a whole via state estimation algorithms such as the Kalman filter (KF) and particle filter. In recent years, Yüksel [18] proposed an IBVS using an ELM and fuzzy logic. First, the method uses a trained ELM to avoid the singularity of the interaction matrix. Second, it uses a smooth adaptive gain based on the control rate and fuzzy logic to improve the convergence speed of IBVS. Finally, it uses an FL unit to keep the features in the field of view (FOV). Qian et al. [19] proposed to use the Kalman-Bucy filter (KBF) to predict the image Jacobian matrix, wherein the image Jacobian matrix is transformed into the state matrix of the KBF. However, this method does not perform well in environments with unknown non-Gaussian noise. In order to improve system adaptability to noises, a more recent KF method uses fuzzy logic (FL) to adjust the covariance matrices Q and R [20], which improves the estimation accuracy. The drawbacks of this method include the difficulty in determining the increments of the filter parameters and its unsuitability for dynamic unknown environments; further, the controller has no unified design standard, which hinders practical application. Meanwhile, certain studies have estimated the image Jacobian matrix using intelligent algorithms. In [21], a Kalman-neural-network filtering scheme was put forward to improve the stability of the algorithm against noise perturbation. However, the convergence of the method was very slow, and the trajectory of the end effector was not sufficiently smooth. Junaid [22] designed a four-layer artificial neural network trained to map between the image feature space and the manipulator motion space, which reduced the IBVS system's calculation time. The disadvantages of this method were that the neural network needed to reacquire training samples and retrain for different robot models, that it trades training time for prediction time, and that neural networks suffer from under-fitting and over-fitting. Miljković et al. [23] proposed to use the reinforcement learning algorithms Q-learning and SARSA to train the mapping model between the image feature space and the robot motion space; however, the velocities exhibited obvious oscillations. Hwang et al. [24] reduced the computational complexity of the image-matrix pseudo-inverse and improved the efficiency of IBVS; their method also reduced the impact of system noise and improved the stability of IBVS.

2.3 Hybrid-based visual servoing

As the name implies, hybrid-based visual servoing combines IBVS and PBVS. This method usually performs a preliminary alignment through a position-based method, followed by a precise approach through an image-based method. However, HBVS must solve the homography matrix, which is computationally complicated. Araar et al. [25] proposed an HBVS for the translational kinematics of a vertical takeoff and landing (VTOL) vehicle. It combines the robustness of IBVS with the global stability of PBVS and decides which strategy to use by switching. Mekonnen et al. [26] put forward a novel hybrid control method for the visual servoing of mobile robots, in which the position-based method is used for global routing and the image-based method for precise navigation.

3. Kalman estimation model for image Jacobian

The image feature error is defined as follows:

\(e_S(t) = S(p_i(t), a) - S^{*}\)       (1)

where \(S(p_i(t), a)\) and \(S^{*}\) represent the current image feature and the desired image feature, respectively. The parameter \(p_i(t)\) represents the coordinates of the n feature points, and \(a\) represents a parameter set containing the intrinsic parameters of the camera (such as the focal length and pixel size).

In IBVS, the relationship between image features and camera velocity is expressed as follows:

\(\begin{aligned}\dot{S}=L_{S} \dot{\xi}\end{aligned}\)       (2)

where 𝐿𝑆 ∈ ℜ𝑛×6 denotes the interaction matrix of 𝑆, \(\begin{aligned}\dot{S}\end{aligned}\) is the time derivative of the features, and \(\begin{aligned}\dot{\xi}\end{aligned}\) is the velocity of the camera. Subsequently, we have

\(\begin{aligned}\dot{e}_{S}(t)=L_{e} \dot{\xi}\end{aligned}\)       (3)

where 𝐿𝑒 = 𝐿𝑆. Let us design a velocity controller for the IBVS system and attempt to decrease the error by exponential decay, i.e., \(\begin{aligned}\dot{e}=-\lambda e\end{aligned}\). We then have

\(\begin{aligned}\dot{\xi}=-\lambda L_{e}^{\dagger} e_{S}(t)\end{aligned}\)       (4)

Here, \(L_e^{\dagger} \in \Re^{6 \times n}\) denotes the Moore–Penrose pseudo-inverse matrix of \(L_e\).

In fact, the depth information of the features is difficult to estimate. Since the actual value of \(L_e\) or \(L_e^{\dagger}\) depends on the feature depth, it cannot be obtained directly. Thus, we need an approximation \(\hat{L}_{e}^{\dagger}\), and the velocity controller consequently becomes

\(\begin{aligned}\dot{\xi}=-\lambda \hat{L}_{e}^{\dagger} e_{S}(t)=-\lambda \hat{L}_{e}^{\dagger}\left(S-S^{*}\right)\end{aligned}\)        (5)

Define the joint angle vector of the m-degree-of-freedom (DOF) manipulator as \(q = [q_1, \cdots, q_m]^T\); the joint velocity vector is then \(\dot{q} = [\dot{q}_1, \cdots, \dot{q}_m]^T\). The relationship between the joint velocity of the manipulator and the end-effector velocity is

\(\begin{aligned}\dot{\xi}=J(q) \dot{q}\end{aligned}\)       (6)

where 𝐽(𝑞) denotes the robot Jacobian matrix. The relationship between the image feature error rate of change and the angular velocity of the robotic arm joint is

\(\begin{aligned}\dot{S}=J_{q} \cdot \dot{q}\end{aligned}\)       (7)

where 𝐽𝑞 = 𝐿𝑒 ⋅ 𝐽(𝑞) denotes the image Jacobian matrix. Consequently, the joint velocity controller according to Eqs. (5), (6), and (7) is defined as

\(\begin{aligned}\dot{q}=-\lambda J_{q}^{\dagger} e_{S}(t)\end{aligned}\)       (8)

where \(J_q^{\dagger}\) represents the Moore–Penrose pseudo-inverse matrix of \(J_q\).
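For illustration, the controller of Eq. (8) amounts to one pseudo-inverse and one scalar gain per control cycle. Below is a minimal NumPy sketch of this controller (our illustration, with arbitrary dimensions and gain; not the authors' code):

```python
import numpy as np

def joint_velocity(J_q, e_S, lam=0.5):
    """Joint velocity controller of Eq. (8): q_dot = -lambda * J_q^+ e_S.

    J_q : (n, m) image Jacobian mapping joint velocities to feature velocities.
    e_S : (n,) image feature error S - S*.
    lam : positive scalar gain setting the exponential decay rate of the error.
    """
    return -lam * np.linalg.pinv(J_q) @ e_S

# Example: 8 feature coordinates (4 image points), 6 joints.
J_q = np.random.randn(8, 6)        # stand-in for an estimated image Jacobian
e_S = np.random.randn(8)           # stand-in feature error
q_dot = joint_velocity(J_q, e_S)   # (6,) joint velocity command
```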

In order to address the problems of classical IBVS, Qian et al. [19] established a KF model to estimate \(J_q\) in Eq. (7). The matrix \(J_q \in \Re^{n \times m}\) is described as

\(\begin{aligned}J_{q}(q)=\left[\frac{\partial S}{\partial q}\right]=\left[\begin{array}{ccc}\frac{\partial S_{1}(q)}{\partial q_{1}} & \cdots & \frac{\partial S_{1}(q)}{\partial q_{m}} \\ \vdots & \ddots & \vdots \\ \frac{\partial S_{n}(q)}{\partial q_{1}} & \cdots & \frac{\partial S_{n}(q)}{\partial q_{m}}\end{array}\right]_{n \times m}=\left[\begin{array}{ccc}j_{11} & \cdots & j_{1 m} \\ \vdots & \ddots & \vdots \\ j_{n 1} & \cdots & j_{n m}\end{array}\right]_{n \times m}\end{aligned}\)       (9)

The estimation of the image Jacobian matrix is a very important problem in IBVS. The KF is the optimal linear state estimator under independent Gaussian white noises [27,28]. The image Jacobian matrix estimation problem can thus be transformed into a KF state estimation problem. The state and observation models, respectively, are given as

\(X_{t+1/t} = E X_{t/t} + W_t\)       (10)

\(Z_{t+1} = H_{t+1} X_{t+1/t} + V_{t+1}\)       (11)

where \(W_t \in \Re^{nm}\) and \(V_t \in \Re^{n}\) denote the zero-mean process noises and observation noises, whose covariances are \(Q(t)\) and \(R(t)\), respectively; \(Q(t)\) is an \((nm) \times (nm)\) matrix and \(R(t)\) is an \(n \times n\) matrix. \(X_{t/t}\) is the state vector of the robot, formed by concatenating the row vectors of \(J_q\).

\(X_{t/t} = [j_{11}, j_{12}, \ldots, j_{nm}]^{T}_{(n \cdot m) \times 1}\)       (12)

In Eq. (11), 𝑍𝑡+1 ∈ ℜ𝑛 denotes the observation vector at the current instant.

\(Z_{t+1} = S_{t+1} - S_t = J_q \cdot \dot{q}(t)\)       (13)

Thus, the observation matrix 𝐻𝑡+1 is defined as

\(\begin{aligned}H_{t+1}=\left[\begin{array}{ccc}\dot{q}(t)^{T} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \dot{q}(t)^{T}\end{array}\right]_{n \times(n * m)}\end{aligned}\)       (14)

According to Eqs. (10) and (11), we set up the following recursive expressions:

1. Prediction step:

\(\left\{\begin{array}{l}X_{t+1 / t}=E X_{t / t} \\ P_{t+1 / t}=E P_{t / t} E^{T}+Q_{t}\end{array}\right.\)       (15)

2. Update step:

\(\left\{\begin{array}{l}K_{t+1}=P_{t+1 / t} H_{t+1}^{T}\left(H_{t+1} P_{t+1 / t} H_{t+1}^{T}+R_{t+1}\right)^{-1} \\ X_{t+1 / t+1}=X_{t+1 / t}+K_{t+1}\left(Z_{t+1}-H_{t+1} X_{t+1 / t}\right) \\ P_{t+1 / t+1}=\left(E-K_{t+1} H_{t+1}\right) P_{t+1 / t}\end{array}\right.\)       (16)

Here, the first line of the update step computes the Kalman gain, in which \((H_{t+1} P_{t+1/t} H_{t+1}^{T} + R_{t+1})^{-1}\) captures the uncertainty of the observation; the second line updates the state based on the observed information; and the third line computes the covariance matrix of the updated state. As can be observed from the prediction and update steps, using the KF method to predict the image Jacobian matrix has an obvious defect: when the noise is Gaussian white noise, the KF algorithm provides an optimal estimate, but in actual environments, the KF algorithm is sensitive to the statistical characteristics of the noises generated during robot motion and introduced by the visual sensor. Therefore, in the next section, we use the KF-AVOA-ELM method to estimate the image Jacobian matrix.
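For concreteness, one iteration of Eqs. (10)-(16) for the stacked Jacobian state can be sketched as follows (a simplified NumPy illustration, assuming the state transition E is the identity and the noise covariances Q and R are known and fixed; variable names and dimensions are our assumptions):

```python
import numpy as np

def kf_jacobian_step(x, P, q_dot, dS, Q, R):
    """One prediction/update cycle of Eqs. (15)-(16) for the stacked Jacobian.

    x     : (n*m,) stacked image Jacobian state, row-major as in Eq. (12).
    P     : (n*m, n*m) state covariance matrix.
    q_dot : (m,) joint velocity applied over the last control period.
    dS    : (n,) observed feature change S_{t+1} - S_t, Eq. (13).
    """
    n, m = dS.size, q_dot.size
    # Observation matrix of Eq. (14): block-diagonal copies of q_dot^T.
    H = np.kron(np.eye(n), q_dot.reshape(1, m))
    # Prediction step (the state transition E is taken as the identity).
    x_pred = x
    P_pred = P + Q
    # Update step: Kalman gain, state correction, covariance update.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_new = x_pred + K @ (dS - H @ x_pred)
    P_new = (np.eye(n * m) - K @ H) @ P_pred
    return x_new, P_new, K

# Example with n = 8 feature coordinates and m = 6 joints.
n, m = 8, 6
x, P = np.zeros(n * m), np.eye(n * m)
Q, R = 1e-3 * np.eye(n * m), 1e-2 * np.eye(n)
x, P, K = kf_jacobian_step(x, P, np.random.randn(m), np.random.randn(n), Q, R)
J_q_est = x.reshape(n, m)   # unstack back into the n x m image Jacobian
```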

4. KF with AVOA-ELM for dynamic Jacobian estimation

4.1 Extreme Learning Machine

The ELM [29,30] randomly chooses the input weights and biases and then solves for the output weights with regularization, avoiding the multiple iterations of conventional training; this speeds up learning while preserving the ability to approximate any continuous system. The output function of the ELM is [31]

\(f_{ELM_L}(x) = \sum_{i=1}^{L} \beta_i h_i(w_i, b_i, x) = h(w, b, x)\beta\)       (17)

where \(\beta = [\beta_1, \ldots, \beta_L]^T\) represents the output weight vector of the hidden layer. Further, \(h(w, b, x) = [h_1(w_1, b_1, x), \ldots, h_L(w_L, b_L, x)]\) denotes the mapping between the hidden-layer input and output. The ELM minimizes both the training error and the norm of the output weight vector:

\(\|H(w, b, x) \hat{\beta}-T\|=\min _{\beta}\|H(w, b, x) \beta-T\|\)       (18)

\(\begin{aligned}\|\hat{\beta}\|=\min _{\beta}\|\beta\|\end{aligned}\)       (19)

Here, \(\hat{\beta}=H^{\dagger} T\) denotes the solution of \(H\beta = T\) determined via least squares, where \(H^{\dagger}\) is the Moore–Penrose generalized inverse matrix of \(H\) [32]. The input-layer weights \(w_i = [w_{i1}, \ldots, w_{iL}] \in \Re^{L}\), \(\forall i \in \{1, \ldots, n\}\), and the bias \(b\) remain fixed after their random selection.
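A minimal ELM regressor following Eqs. (17)-(19) can be sketched as follows (sigmoid hidden nodes; an illustrative toy model, not the exact network configuration used in this paper):

```python
import numpy as np

class ELM:
    """Single-hidden-layer ELM: random (w, b), output weights by pseudo-inverse."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((n_in, n_hidden))  # random input weights
        self.b = rng.standard_normal(n_hidden)          # random hidden biases

    def _h(self, X):
        # Hidden-layer output h(w, b, x) of Eq. (17), with sigmoid nodes.
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

    def fit(self, X, T):
        # beta_hat = H^dagger T: the minimum-norm least-squares solution of
        # H beta = T, satisfying Eqs. (18) and (19).
        self.beta = np.linalg.pinv(self._h(X)) @ T
        return self

    def predict(self, X):
        return self._h(X) @ self.beta

# Toy regression example.
X = np.random.randn(200, 3)
T = np.sin(X).sum(axis=1, keepdims=True)
model = ELM(n_in=3, n_hidden=50).fit(X, T)
rmse = np.sqrt(np.mean((model.predict(X) - T) ** 2))
```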

4.2 African Vultures Optimization Algorithm

The AVOA [33] comprises the following stages: (1) determine the best vulture in each group; this stage selects the best solution for each group of vultures; (2) calculate the hunger rate of the vultures and, based on it, judge whether the algorithm is in the exploration stage or the exploitation stage; (3) simulate the exploratory movement of vultures looking for food; (4) select different exploitation strategies according to different parameters. A complete description is given in [33].

4.3 African Vultures Optimization Algorithm - Extreme Learning Machine

Because the input weights and biases of the ELM are randomly selected, they may not bring out the full potential of the ELM. We therefore propose to use the AVOA to optimize these two parameters. In the AVOA, the prediction accuracy of the ELM (measured by the RMSE) serves as the fitness, and the best output of the AVOA is used as the input weights and biases of the ELM. The performance of the AVOA has been demonstrated by its proposers in [33]: its computational complexity and running time are superior to those of most meta-heuristic algorithms, and its prediction accuracy is also better. Therefore, we use the AVOA to find the optimal input weights and biases for the ELM, which can greatly improve the prediction accuracy of the ELM algorithm. We compare the prediction performance of the AVOA-ELM algorithm in the experimental part. The flowchart of the AVOA-ELM algorithm is shown in Fig. 1.


Fig. 1. AVOA-ELM frame.
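The loop of Fig. 1 can be summarized in code as follows. Since the AVOA update rules are detailed in [33], the sketch below substitutes a simple move-toward-the-best placeholder where those rules would go; the fitness of each candidate is the training RMSE of the ELM decoded from it, as described above. Everything here is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def elm_rmse(candidate, X, T, n_in, n_hidden):
    """Fitness: decode candidate into ELM (w, b), train, return training RMSE."""
    w = candidate[: n_in * n_hidden].reshape(n_in, n_hidden)
    b = candidate[n_in * n_hidden:]
    H = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    beta = np.linalg.pinv(H) @ T
    return float(np.sqrt(np.mean((H @ beta - T) ** 2)))

def avoa_elm(X, T, n_hidden=20, pop=30, iters=50, seed=1):
    """Search for ELM input weights and biases minimizing the RMSE fitness."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    dim = n_in * n_hidden + n_hidden
    vultures = rng.uniform(-1.0, 1.0, (pop, dim))   # initial population
    best, best_f = None, np.inf
    for _ in range(iters):
        for i in range(pop):
            f = elm_rmse(vultures[i], X, T, n_in, n_hidden)
            if f < best_f:
                best, best_f = vultures[i].copy(), f
        # Placeholder update: drift toward the best vulture plus noise.
        # AVOA's hunger-rate-driven exploration/exploitation phases [33]
        # would replace this line in a faithful implementation.
        vultures += 0.5 * (best - vultures) + 0.1 * rng.standard_normal((pop, dim))
    return best, best_f
```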

4.4 KF-AVOA-ELM Algorithm

This section puts forward the online estimation of the image Jacobian matrix using KF-AVOA-ELM. The original KF algorithm provides the optimal state estimate for a linear time-invariant system when the noise is known to be Gaussian white noise. If the observed vector strictly followed the models represented by Eqs. (10) and (11), the system would acquire the optimal estimate of the state vector. Because of the nonlinear errors of the state and observation models, the state estimate is sub-optimal. To obtain the optimal estimate, the error compensation is given as [21]:

\(\begin{aligned}\widehat{X}_{t / t}^{\prime}=\widehat{X}_{t / t}+e_{\hat{X}_{t / t}}\end{aligned}\)       (20)

where \(\widehat{X}_{t/t}\) denotes the sub-optimal state obtained by the KF, \(\widehat{X}_{t/t}^{\prime}\) the optimal state, and \(e_{\hat{X}_{t/t}}\) the state error of the KF arising from the model errors, process noises, and observation noises. \(e_{\hat{X}_{t/t}}\) is output by the AVOA-ELM model, whose inputs are as follows:

\(\begin{aligned}e_{\widetilde{K}_{t / t-1}}=K_{t}-K_{t-1}\end{aligned}\)       (21)

\(\begin{aligned}e_{\tilde{X}_{t / t-1}}=\hat{X}_{t}-\hat{X}_{t-1}\end{aligned}\)       (22)

\(\begin{aligned}e_{\tilde{Z}_{t / t-1}}=Z_{t}-H_{t} \hat{X}_{t-1}\end{aligned}\)       (23)

These three inputs correspond to the model errors, process noises, and observation noises of the KF, respectively. The proposed scheme is presented as Algorithm 1. In the next section, we present our design of a suitable controller to realize a novel IBVS based on FL-KF-AVOA-ELM.

Algorithm 1: KF-AVOA-ELM for dynamic Jacobian estimation

Step 1: Input the image Jacobian matrix at the last instant \(\begin{aligned}\hat{X}_{t / t}^{\prime}\end{aligned}\), covariance matrix 𝑃𝑡/𝑡, and Kalman gain 𝐾𝑡.

Step 2: Obtain current observation value for image feature 𝑍𝑡+1.

Step 3: Estimate the next state variable and covariance matrix:

\(\left\{\begin{array}{l}X_{t+1 / t}=E X_{t / t} \\ P_{t+1 / t}=E P_{t / t} E^{T}+Q_{t}\end{array}\right.\)

Step 4: Calculate Kalman gain 𝐾𝑡+1 and update state variable 𝑋𝑡+1/𝑡+1 and covariance matrix 𝑃𝑡+1/𝑡+1.

\(\begin{aligned}\left\{\begin{array}{c}K_{t+1}=P_{t+1 / t} H_{t+1}^{T}\left(H_{t+1} P_{t+1 / t} H_{t+1}^{T}+R_{t+1}\right)^{-1} \\ X_{t+1 / t+1}=X_{t+1 / t}+K_{t+1}\left(Z_{t+1}-H_{t+1} X_{t+1 / t}\right) \\ P_{t+1 / t+1}=\left(E-K_{t+1} H_{t+1}\right) P_{t+1 / t}\end{array}\right.\end{aligned}\)

Step 5: Output: \(\begin{aligned}\hat{X}_{t+1 / t+1}=X_{t+1 / t+1}\end{aligned}\), 𝐾𝑡+1, 𝑃𝑡+1/𝑡+1.

Step 6: Calculate the gain error \(e_{\widetilde{K}_{t+1 / t}}\) via \(e_{\widetilde{K}_{t+1 / t}}=K_{t+1}-K_{t}\).

Step 7: Calculate the estimation error \(e_{\tilde{X}_{t+1 / t}}\) via \(e_{\tilde{X}_{t+1 / t}}=\widehat{X}_{t+1}-\hat{X}_{t}\).

Step 8: Calculate the observation error \(e_{\tilde{Z}_{t+1 / t}}\) via \(e_{\tilde{Z}_{t+1 / t}}=Z_{t+1}-H_{t+1} \hat{X}_{t}\).

Step 9: Use the AVOA-ELM model proposed in Section 4.3 to calculate the KF state estimation error \(e_{\hat{X}_{t+1 / t+1}}\).

Step 10: Calculate the optimal estimation image Jacobian matrix as \(\begin{aligned}\hat{X}_{t+1 / t+1}^{\prime}=\hat{X}_{t+1 / t+1}+e_{\hat{X}_{t+1 / t+1}}\end{aligned}\).
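In code, Steps 3-10 amount to running one KF cycle, assembling the three error signals of Eqs. (21)-(23), and adding the AVOA-ELM output to the KF estimate. A sketch is given below, reusing the `kf_jacobian_step` routine sketched in Section 3 and assuming `elm` is a trained AVOA-ELM model exposing a `predict` method (both are assumptions of this illustration):

```python
import numpy as np

def kf_avoa_elm_step(x_prev, P, K_prev, q_dot, dS, Q, R, elm):
    """One KF-AVOA-ELM Jacobian update (Algorithm 1, Steps 3-10)."""
    n, m = dS.size, q_dot.size
    H = np.kron(np.eye(n), q_dot.reshape(1, m))     # Eq. (14)
    # Steps 3-5: ordinary KF prediction and update (Section 3 sketch).
    x_kf, P_new, K_new = kf_jacobian_step(x_prev, P, q_dot, dS, Q, R)
    # Steps 6-8: error signals of Eqs. (21)-(23).
    e_K = (K_new - (K_prev if K_prev is not None else K_new)).ravel()
    e_X = x_kf - x_prev                  # state-estimate change
    e_Z = dS - H @ x_prev                # innovation
    # Step 9: the trained AVOA-ELM predicts the KF state-estimation error.
    e_hat = elm.predict(np.concatenate([e_K, e_X, e_Z])[None, :]).ravel()
    # Step 10: compensated (near-optimal) Jacobian estimate, Eq. (20).
    return x_kf + e_hat, P_new, K_new
```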

5. IBVS control scheme based on KF-AVOA-ELM and FL

In this section, we present the design of an IBVS system based on the KF-AVOA-ELM algorithm without camera calibration, to which we add an FL gain adaptation method based on the image feature error. According to our analysis, using KF-AVOA-ELM to predict the image Jacobian matrix is robust to the internal and external camera parameters and does not require the forward kinematics model of the robotic arm. Hereafter, we describe the initialization of our IBVS system, the FL gain-adaptive controller, and the operation of our IBVS system.

5.1 Initialization of proposed IBVS system

It is important to design a controller for the IBVS system (corresponding to Eq. (8)), and the performance of the controller depends on 𝜆 and \(\begin{aligned}\hat{J}_{q}^{\dagger}\end{aligned}\), where 𝜆 denotes the gain and \(\begin{aligned}\hat{J}_{q}^{\dagger} \in \mathfrak{R}^{m \times n}\end{aligned}\) is the Moore–Penrose pseudo-inverse matrix. \(\begin{aligned}\hat{J}_{q}\end{aligned}\) can be estimated by means of the KF-AVOA-ELM algorithm. In KF-AVOA-ELM, the current image features are used to estimate the image Jacobian matrix. The features are given as follows:

\(S_t = [s_1, \cdots, s_k]^T = [u_1, v_1, \cdots, u_k, v_k]^{T}_{2k \times 1}\)       (24)

Here, 𝑠𝑖 = [𝑢𝑖, 𝑣𝑖] denotes an image feature point. From Eqs. (12) and (14), the state vector and observation matrix can be, respectively, expressed as

\(X_{t/t} = [j_{11}, j_{12}, \ldots, j_{2k,6}]^{T}_{(2k \times 6) \times 1}\)       (25)

\(\begin{aligned}H_{t}=\left[\begin{array}{ccc}\dot{q}(t)^{T} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \dot{q}(t)^{T}\end{array}\right]_{2 k \times(6 \times 2 k)}\end{aligned}\)       (26)

Here, 𝑗ij denotes the i-th row and j-th column element of \(\begin{aligned}\hat{J}_{q}(t / t)\end{aligned}\).

The initial state of the KF-AVOA-ELM algorithm strongly influences the robustness of the IBVS control strategy and the stability of the manipulator motion. In this study, we use the initialization method proposed in [34]. Firstly, we command the gripper to perform m linearly independent probe motions \(dq^1 \cdots dq^m\) in the neighborhood of its initial pose. Next, we observe the corresponding feature changes \(dS^1 \cdots dS^m\) over the m steps. Finally, we set \(\hat{J}_{q}^{\prime}(0)\) as the initial state \(\hat{X}(0)\) of the KF-AVOA-ELM:

\(\begin{aligned}\hat{X}(0)=\hat{J}_{q}^{\prime}(0)=\left[d S^{1} \cdots d S^{m}\right]\left[d q^{1} \cdots d q^{m}\right]^{-1}\end{aligned}\)       (27)
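A sketch of this probe-motion initialization, assuming a hypothetical helper `observe_features(q)` that returns the image feature vector at joint configuration q:

```python
import numpy as np

def init_jacobian(q0, observe_features, m=6, step=1e-3):
    """Estimate J_q'(0) from m linearly independent probe motions, Eq. (27)."""
    S0 = observe_features(q0)
    dq = step * np.eye(m)                 # columns: probe motions dq^1..dq^m
    dS = np.column_stack([observe_features(q0 + dq[:, i]) - S0
                          for i in range(m)])
    # J_q'(0) = [dS^1 ... dS^m] [dq^1 ... dq^m]^{-1}
    return dS @ np.linalg.inv(dq)
```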

In Fig. 2, the KF-AVOA-ELM state estimation algorithm estimates the optimal value of the image Jacobian matrix in real-time. Subsequently, the velocity controller controls the manipulator motion to reduce the image feature error to a minimum and further drive the manipulator to move to the desired pose. From Eqs. (1) and (8), image error 𝑒𝑆(𝑡) and joint velocity 𝑞̇(𝑡) at time t are given, respectively, as

𝑒(𝑡) = 𝑒𝑆(𝑡) = 𝑆(𝑡) − 𝑆*       (28)

\(\begin{aligned}\dot{q}(t)=-\lambda \hat{J}_{q}^{+}(t) e(t)\end{aligned}\)       (29)

5.2 FL controller

Fuzzy logic control is a rule-based control method with a simple design that is easy to use. A fuzzy logic control system has good robustness and is suitable for nonlinear and time-varying systems. Estimating an appropriate gain \(\lambda\) is very important for the controller, since it can speed up the convergence rate of the IBVS system. In our approach, the adaptive gain \(\lambda\) depends on \(\|e\|\), \(d\|e\|/dt\), and \(\|q\|\). According to Eq. (29), when \(\|e\|\) is large, we expect \(\dot{q}(t)\) to be large; thus, we choose \(\|e\|\) as one input to the FL unit. We expect the image feature error to exhibit a smooth decline, which is why we choose \(d\|e\|/dt\) as another input. Finally, according to the manipulator joint angle constraint, \(\|q\|\) is selected as the third input. The FL unit shown in Fig. 2 is used for gain adaptation. The fuzzy rule base for \(\|e\|\) and \(d\|e\|/dt\) is the same as that of the classical PD controller for IBVS, and the rules for \(\|q\|\) are based on a previous study [35]. The fuzzy reasoning rule base is generated from these rules. The four components of the FL unit, namely the classification method, input membership functions, fuzzy reasoning rule base, and reasoning method, determine the output of the FL unit. The calculation process of \(\lambda\) is as follows:

1. The input data ∥𝑒∥, 𝑑||𝑒||/dt, and ∥𝑞∥ are obtained.

2. Fuzzy processing via membership functions is initiated.

3. The fuzzy reasoning engine calculates the fuzzy output value according to the fuzzy reasoning rule base.

4. Parameter 𝜆 is obtained by the Mamdani reasoning method and clarification processing. The regional centroid formula for Mamdani reasoning is given as follows:

\(\mu_i = \max\left(f_j(\|e\|), f_k(d\|e\|/dt), f_l(\|q\|)\right), \quad \forall j, k, l \in \{1, \ldots, z\}\)       (30)

\(\begin{aligned}\lambda=\frac{\sum_{i=1}^{n} \mu_{i} \int f_{i}(\lambda) \lambda d \lambda}{\sum_{i=1}^{n} \mu_{i} \int f_{i}(\lambda) d \lambda}\end{aligned}\)       (31)

Here, \(f_j\), \(f_k\), and \(f_l\) are the input membership functions, and \(f_i\) is the output membership function.
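A compact sketch of Steps 1-4 with triangular membership functions is given below; the membership function parameters and the two rules shown are illustrative placeholders, not the tuned rule base used in this paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_gain(e_norm, de_norm, q_norm):
    """Mamdani gain adaptation following Eqs. (30)-(31), with two rules."""
    lam_axis = np.linspace(0.0, 2.0, 201)   # discretized output universe
    # Rule firing strengths via Eq. (30): max over the three input memberships.
    mu_small = max(tri(e_norm, -1, 0, 50), tri(de_norm, -5, 0, 5),
                   tri(q_norm, -1, 0, 2))        # rule 1 -> small gain
    mu_large = max(tri(e_norm, 30, 100, 170), tri(de_norm, 2, 5, 8),
                   tri(q_norm, 1, 3, 5))         # rule 2 -> large gain
    # Clip each rule's output set, aggregate, and take the region centroid
    # (a discrete version of Eq. (31)).
    agg = np.maximum(np.minimum(mu_small, tri(lam_axis, 0.0, 0.3, 0.8)),
                     np.minimum(mu_large, tri(lam_axis, 0.8, 1.5, 2.0)))
    return float(np.sum(agg * lam_axis) / (np.sum(agg) + 1e-12))

# Example call with illustrative inputs.
lam = fuzzy_gain(e_norm=80.0, de_norm=-1.0, q_norm=1.0)
```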

5.3 IBVS system process

The main tasks of the IBVS system shown in Fig. 2 are to estimate the image Jacobian matrix, obtain the adaptive gain, achieve joint velocity control of the manipulator, and drive the manipulator from the current pose to the desired pose. In this study, on the basis of the online image Jacobian estimation scheme, we design an AVOA-ELM-based error compensation for the image Jacobian matrix, which improves its estimation accuracy. To improve the IBVS convergence rate, we design an FL unit for gain adaptation whose inputs are \(\|e\|\), \(d\|e\|/dt\), and \(\|q\|\). A brief description of the IBVS based on FL-KF-AVOA-ELM is as follows. Firstly, according to Eq. (27), we obtain \(\hat{J}_{q}^{\prime}(0)\) and initialize the KF-AVOA-ELM. Secondly, we calculate the observation matrix \(H_{t-1}\) via Eq. (26). Thirdly, we compute \(\hat{J}_{q}(t / t)\) by means of the KF observation and update steps based on \(\hat{J}_{q}^{\prime}(t-1 / t-1)\). Meanwhile, \(e_{\hat{J}_{q}(t / t)}\) is predicted by the trained AVOA-ELM model, and the optimal image Jacobian matrix estimate \(\hat{J}_{q}^{\prime}(t / t)\) is obtained at time t. We compute the adaptive gain \(\lambda\) via the FL unit according to the error feedback at time t-1. Finally, we drive the manipulator to the next pose by means of the joint velocity controller, and the image features are obtained at time t. If the squared image feature error \(F(t)=\frac{1}{2} e_{S}(t)^{T} e_{S}(t)\) reaches zero, the IBVS loop ends; otherwise, we estimate the image Jacobian matrix via KF-AVOA-ELM and move to the next iteration (\(t \rightarrow t+1\)).


Fig. 2. Schematic of proposed IBVS based on KF-AVOA-ELM and FL.
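Putting the pieces together, the loop of Fig. 2 can be sketched at pseudocode level as follows, reusing the routines sketched in earlier sections (`fuzzy_gain`, `kf_avoa_elm_step`) and assuming a hypothetical `robot` object with a joint vector `q`, a `features()` method, and a `move(q_dot, dt)` method; all of these interfaces are illustrative assumptions.

```python
import numpy as np

def ibvs_loop(robot, S_star, x0, P0, elm, dt=0.05, e_thr=0.5, max_iter=500):
    """FL-KF-AVOA-ELM servo loop: estimate J_q, adapt the gain, drive the arm."""
    x, P, K_prev = x0, P0, None
    n, m = S_star.size, robot.q.size
    Q, R = 1e-3 * np.eye(n * m), 1e-2 * np.eye(n)
    e_prev = None
    for t in range(max_iter):
        S = robot.features()
        e = S - S_star                                   # Eq. (28)
        if np.linalg.norm(e) <= e_thr:                   # feature error converged
            break
        de = 0.0 if e_prev is None else \
            (np.linalg.norm(e) - np.linalg.norm(e_prev)) / dt
        lam = fuzzy_gain(np.linalg.norm(e), de, np.linalg.norm(robot.q))
        q_dot = -lam * np.linalg.pinv(x.reshape(n, m)) @ e   # Eq. (29)
        robot.move(q_dot, dt)                            # drive to the next pose
        dS = robot.features() - S                        # observation, Eq. (13)
        x, P, K_prev = kf_avoa_elm_step(x, P, K_prev, q_dot, dS, Q, R, elm)
        e_prev = e
    return t
```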

6. Simulation results

To assess the performance of the proposed IBVS system, we compare our approach with the IBVS methods based on KF [19], KFANN [21], FL-KF [20], BELM-SVSF-IBVS [6], and ELM-FL-IBVS [18]. In this section, we describe our MATLAB-based experiments on these systems with a robot based on the EIH model. The Fuzzy Logic Toolbox, Robotics Toolbox, and Machine Vision Toolbox [36] are used in these systems. The 6-DOF robotic arm PUMA 560 (whose parameters can be found in [37]) is selected as the robotic arm model for the IBVS experiments. The camera focal length is 8 mm, the resolution is 1024 × 1024 px, the principal point is (512, 512), and the control loop runs at 20 Hz. All of the simulation experiments are based on the following assumptions: (1) there is no conversion between the end-effector frame and the camera frame; (2) the feature points in the image plane are not collinear.

The input training set of the AVOA-ELM is obtained from the KF. We acquire the gain error \(e_{\widetilde{K}_{t+1 / t}}\), state estimation error \(e_{\tilde{X}_{t+1 / t}}\), and observation error \(e_{\tilde{Z}_{t+1 / t}}\) for every loop of the KF. The output training set \(e_{\hat{X}_{t+1 / t+1}}\) is the difference between the desired state estimate and the KF state estimate. A total of 501 samples are used to train the AVOA-ELM model, which has 104 hidden nodes and 48 output nodes. The best validation RMSE of the training is 0.0028, which shows that the error compensation model is effective. In addition, we train ELMs optimized via differential evolution (DE), the whale optimization algorithm (WOA), the grasshopper optimization algorithm (GOA), particle swarm optimization (PSO), the sine cosine algorithm (SCA), and the AVOA on this data set, comparing the RMSE of each ELM over the iterations. As shown in Fig. 3, the prediction accuracy of AVOA-ELM during the iterative process is superior to that of the other ELM-optimizing algorithms.


Fig. 3. Iterative RMSE results of AVOA-ELM and other optimized ELMs.

To evaluate the IBVS systems, three evaluation metrics are considered in this paper: the convergence rate, the end-effector trajectory length, and the error cost. The metrics are defined as follows:

\(\begin{aligned}n_{c}=\arg \min _{1 \leq n \leq \infty}\left(\|e(n)\| \leq e_{t h r}\right)\end{aligned}\)       (35)

\(\operatorname{len}_{c}=\sum_{n=2}^{n_{c}}\left\|p_{n}-p_{n-1}\right\|\)       (36)

\(e_{IAE}=\sum_{n=1}^{n_{c}}\|e(n)\|\)       (37)

Here, n indicates the iteration number and \(e(n)\) the image feature error of the nth iteration, where \(1 \leq n \leq n_c\); \(e_{thr}\) is the threshold for error convergence, defined according to accuracy requirements; and \(p_n\) is the coordinate of the end effector at the nth iteration. The end-effector trajectory length in 3D space is \(\operatorname{len}_c\), while \(e_{IAE}\) denotes the accumulated per-iteration error. We choose these three evaluation metrics with real-time applications of our IBVS system in mind. From the convergence-rate metric, we can estimate the time taken by each IBVS system. We note that [38] evaluates VS systems using the spanned image area, track length, and curvature. The end-effector trajectory length is also a useful cost estimate, and a smaller error cost implies that the feature error converges quickly and smoothly.
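All three metrics are straightforward to compute from the logged error vectors and end-effector positions; a sketch of Eqs. (35)-(37) (our illustration):

```python
import numpy as np

def evaluate(errors, positions, e_thr):
    """Compute (n_c, len_c, e_IAE) of Eqs. (35)-(37) from logged data.

    errors    : list of feature-error vectors e(n), one per iteration.
    positions : list of 3-D end-effector coordinates p_n, one per iteration.
    """
    norms = [np.linalg.norm(e) for e in errors]
    # Eq. (35): first iteration whose error norm falls below the threshold.
    n_c = next((n for n, v in enumerate(norms, start=1) if v <= e_thr),
               len(norms))
    # Eq. (36): end-effector path length in 3-D space up to convergence.
    len_c = sum(np.linalg.norm(np.asarray(positions[n]) -
                               np.asarray(positions[n - 1]))
                for n in range(1, n_c))
    # Eq. (37): accumulated error norm up to convergence.
    e_iae = sum(norms[:n_c])
    return n_c, len_c, e_iae
```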

Case 1: Comparison with IBVS based on KF, KFANN and FL-KF

The results of our method and the three other algorithms are shown in Fig. 4. Uniformly distributed random noise with zero mean and variance 0.1 is added to the IBVS systems based on KF, KFANN, FL-KF, and FL-KF-AVOA-ELM. From Fig. 4, it is obvious that the FL-KF-AVOA-ELM IBVS system offers a faster convergence rate than the others. The end-effector trajectories are similar for all four IBVS systems. The \(\operatorname{len}_c\) values of the four IBVS systems are 0.2743, 0.2698, 0.2687, and 0.2619 m, in the order listed. These values are very close to each other because the desired position is close to the initial position; nevertheless, our method has the smallest \(\operatorname{len}_c\) value. The error costs of the four IBVS systems are 2.11e + 4, 2.10e + 4, 1.08e + 4, and 8.21e + 3, in that order. A lower error cost indicates that the proposed IBVS system tends to limit error oscillations. To illustrate the performance of the AVOA-ELM error compensation model, the results of FL-KF IBVS and FL-KF-AVOA-ELM IBVS are shown in Fig. 4 (c) and (d): the \(\operatorname{len}_c\) values are 0.2687 and 0.2619, the error costs are 1.08e + 4 and 8.21e + 3, and the \(n_c\) values are 192 and 92. The AVOA-ELM error compensation model thus improves all three performance indexes of the IBVS control, reducing the trajectory length and error cost and further improving the convergence rate of the IBVS system.


Fig. 4. Results for Case 1. Rows 1 to 4 are IBVS based on (a) KF, (b) KFANN, (c) FL-KF, and (d) FL-KF-AVOA-ELM, respectively. Columns 1, 2, and 3 correspond to (1) feature trajectory, (2) end-effector trajectory, and (3) feature error, respectively.

Case 2: Comparison with BELM-SVSF-IBVS and ELM-FL-IBVS

To further verify the performance of our proposed method, in this case we compare the proposed FL-KF-AVOA-ELM method with two recent methods, BELM-SVSF [6] and ELM-FL [18]. We conducted comparative experiments under the same initial conditions (same feature points, desired features, and initial joint angles of the manipulator). The comparison results are shown in Table 1. In terms of Con.Rate, BELM-SVSF and FL-KF-AVOA-ELM obtain the same result (92), and ELM-FL obtains a relatively poor result (124). In terms of Traj.Len, BELM-SVSF obtains the best result (0.2581), FL-KF-AVOA-ELM obtains the suboptimal result (0.2654), and ELM-FL obtains the worst result (8.21e + 3). In terms of Err.Costs, BELM-SVSF obtains the best result (1.0218e + 4), FL-KF-AVOA-ELM obtains the suboptimal result (0.2654), and ELM-FL obtains the worst result (2.0518e + 4). Overall, our proposed method has the same or even better performance than the two existing methods.

Table 1. Simulation results of BELM-SVSF-IBVS, ELM-FL-IBVS and FL-KF-AVOA-ELM


Case 3: robustness of FL-KF-AVOA-ELM IBVS system

In order to show that our IBVS is more robust to uniformly distributed random noise, we conducted comparative experiments with a noise variance of 0.2, a noise variance of 0.4, and with colored noise (sub-cases 1-3, respectively). Figs. 5 and 6 show the results of sub-cases 1 and 2; sub-case 3 is summarized in Table 2. The \(e_{thr}\) value is 0.5 in this case, and it is used to compare the robustness of our IBVS with those of KF IBVS, KFANN IBVS, and FL-KF IBVS.

In sub-case 1, we add uniformly distributed random noise with zero mean and 0.2 variance to each system. It can be seen from Fig. 5 that the feature trajectories of the methods are very similar, but our method yields a better end-effector trajectory and feature error convergence rate. In the study, we calculated the mean of 50 experiments for each metric (Table 2). The \(n_c\) values of the four methods are 432, 391, 208, and 96; the convergence of our IBVS is clearly faster than those of the other three IBVS methods. The \(\operatorname{len}_c\) values of the four methods are 0.2788, 0.2681, 0.2727, and 0.2621 m in the order listed. These values are very close to each other because the expected position is close to the initial position; however, our method still reduces the length, which lowers the manipulator's motion power consumption. The error costs of the four methods are 2.12e + 4, 2.10e + 4, 1.09e + 4, and 8.21e + 3. The lower error cost of our approach indicates that our method exhibits lower error oscillations and is therefore more stable during convergence.


Fig. 5. Results of variance = 0.2. Rows 1 to 4 illustrate the results obtained with IBVS based on (a) KF, (b) KFANN, (c) FL-KF, and (d) FL-KF-AVOA-ELM, respectively. Columns 1 to 3 correspond to (1) feature trajectory, (2) end-effector trajectory, and (3) feature error, respectively.

Table 2. Simulation results of FL-KF-AVOA-ELM and other algorithms with different noise.


When the variance of the uniformly distributed random noise is increased to 0.4, the simulation results are given in Fig. 6 and the corresponding metrics are listed in Table 2. Upon comparing Figs. 5 and 6, we note that the feature trajectories of the different methods under different uniformly distributed random noises are very similar. However, when the disturbance noise changes, the end-effector trajectories of the other methods exhibit significant changes; that is, our proposed IBVS is more stable under uniformly distributed random noise. From Table 2, we can infer that our method is more stable in terms of the metrics \(n_c\) and \(\operatorname{len}_c\). Our method also has the lowest error cost under the different disturbance noise conditions, which indicates lower error oscillations.


Fig. 6. Results of variance = 0.4. Rows 1 to 4 depict the results obtained by means of IBVS based on (a) KF, (b) KFANN, (c) FL-KF, and (d) FL-KF-AVOA-ELM, respectively. Columns 1 to 3 correspond to (1) feature trajectory, (2) end effector trajectory, and (3) feature error, respectively.

To verify that the proposed IBVS based on FL-KF-AVOA-ELM retains good performance under colored noise, we add colored noise to the system as the random disturbance. The noise introduced is as follows:

e(k) = x(k) + 0.5x(k − 1)       (38)

where x(k) is white noise with zero mean and variance 0.1. Table 2 lists the corresponding indicators. As shown in Table 2, the proposed IBVS based on FL-KF-AVOA-ELM achieves the best results on all three evaluation metrics.
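For reference, the colored noise of Eq. (38) is a first-order moving average of white noise and can be generated as follows (a short sketch; the seed is arbitrary):

```python
import numpy as np

def colored_noise(n_samples, var=0.1, seed=2):
    """Generate e(k) = x(k) + 0.5 x(k-1) with x(k) ~ N(0, var), Eq. (38)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(var), n_samples + 1)
    return x[1:] + 0.5 * x[:-1]
```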

Case 4: non-coplanar point experiment

To further illustrate the performance of the proposed algorithm, this section uses non-coplanar feature points in space for experiments. The actual coordinates of the feature points are:

\(\begin{aligned}P^{\prime}=\left[\begin{array}{cccc}0.25 & 0.25 & -0.25 & -0.25 \\ -0.25 & 0.25 & 0.25 & -0.25 \\ 1.50 & 1.48 & 1.50 & 1.52\end{array}\right]\end{aligned}\)       (39)

In this case, we add uniformly distributed random noise with a mean of 0 and a variance of 0.1 to each IBVS method. Table 3 shows the measurement results of the compared IBVS methods. From Table 3, the three metrics of our method are 165, 0.2601, and 8.71e + 3: the IBVS method proposed in this paper has the fastest convergence speed, the shortest trajectory length, and the smallest error cost. Thus, our IBVS performs better than the other compared algorithms when the points in the target space are non-coplanar.

Table 3. Simulation results of FL-KF-AVOA-ELM and other algorithms with non-coplanar points.


Case 5: ablation experiment

To verify the effectiveness of the proposed FL-KF-AVOA-ELM IBVS, we performed an ablation experiment on the proposed method. We added uniformly distributed random noise with zero mean and 0.1 variance to each IBVS variant, with the other conditions set the same as in Case 1. The results of the ablation experiment are shown in Table 4.

First, we analyze whether the FL unit really improves the convergence rate of IBVS. Comparing the second row with the third row in Table 4, we observe that Con.Rate is reduced by 234 when the FL unit is used, confirming that the FL unit improves the convergence rate of IBVS. Second, we discuss the importance of using the KF in IBVS. Comparing the first row with the third row in Table 4, we observe that Traj.Len is reduced by 0.782 when the KF is used. Third, we discuss the importance of the ELM error compensation model in IBVS. Comparing row 3 with row 4 in Table 4, we observe that the IBVS using the ELM error compensation model has a lower error cost (reduced by 79) and a shorter trajectory length (reduced by 0.0032) than the variant without it. This indicates that the ELM error compensation model is crucial to our IBVS based on FL-KF-AVOA-ELM. Finally, we analyze whether the AVOA improves the performance of the ELM and thus further enables the IBVS to obtain better results on the three metrics. Comparing row 4 with row 5 in Table 4, we observe that all three evaluation metrics improve to some extent when the AVOA is used to optimize the biases and input weights of the ELM; specifically, they are reduced by 21, 0.0036, and 0.36e + 03, respectively.

Table 4. Ablation of FL-KF-AVOA-ELM


7. Conclusion

The proposed FL-KF-AVOA-ELM IBVS system does not require camera calibration. Moreover, it addresses, to a certain extent, three problems of uncalibrated IBVS: the perturbation noises of the robot system, error of noise statistics, and slow convergence. First, our IBVS uses the KF to estimate the image Jacobian matrix online and an AVOA-ELM error compensation model to compensate for the suboptimal KF estimate, thereby addressing the disturbance noises and the noise statistics error. Then, our IBVS uses the FL unit to adapt the control rate, thereby addressing the slow convergence. Finally, we verified the proposed method through simulation experiments, appraising the proposed FL-KF-AVOA-ELM IBVS with three evaluation metrics and comparing it against classical IBVS and IBVS based on KF, KFANN, FL-KF, and ELM. The proposed FL-KF-AVOA-ELM IBVS can suitably handle non-Gaussian perturbation errors and improve the convergence rate. The KF method and our method were simulated under different dynamic disturbance noises, and the proposed FL-KF-AVOA-ELM IBVS showed a strong ability to resist disturbance noise. The ablation experiment showed that each component of the proposed FL-KF-AVOA-ELM IBVS contributes to its performance.

Appendix

Here, we summarize and describe all the variables in Table 5.

Table 5. Variables table.


Acknowledgement

This work is supported by Key R&D Program of Zhejiang Province (No. 2021C03013) and Zhejiang Provincial Natural Science Foundation of China (No. LZ20F020003).

References

  1. Jinhui Wu, Zhehao Jin, Andong Liu, Li Yu, Fuwen Yang, "A survey of learning-based control of robotic visual servoing systems," Journal of the Franklin Institute, vol. 359, pp. 556-577, 2022. https://doi.org/10.1016/j.jfranklin.2021.11.009
  2. Ghufran Ahmad Khan, Jie Hu, Tianrui Li, Bassoma Diallo and Hongjun Wang, "Multi-view data clustering via non-negative matrix factorization with manifold regularization," International Journal of Machine Learning and Cybernetics, vol. 13, pp. 677-689, 2022. https://doi.org/10.1007/s13042-021-01307-7
  3. Bassoma Diallo, Jie Hu, Tianrui Li, Ghufran Ahmad Khan and Ahmed Saad Hussein, "Multi-view document clustering based on geometrical similarity measurement," International Journal of Machine Learning and Cybernetics, vol. 13, pp. 663-675, 2022. https://doi.org/10.1007/s13042-021-01295-8
  4. Wilson, W.J., Hulls, C.C.W. and Bell, G.S., "Relative end-effector control using cartesian positionbased visual servoing," IEEE Transactions on Robotics and Automation, vol. 12, pp. 684-696, 1996. https://doi.org/10.1109/70.538974
  5. Xianxia Zhang, Jinqiang Zhang, Zhiyuan Li, Shiwei Ma and Banghua Yang, "Visual feedback fuzzy control for a robot manipulator based on SVR learning," Journal of System Simulation, vol. 32, no. 10, 2020.
  6. X. Ren, H. Li, Y. Li, "Image-based visual servoing control of robot manipulators using hybrid algorithm with feature constraints," IEEE Access, vol. 8, pp. 223495-223508, 2020. https://doi.org/10.1109/ACCESS.2020.3042207
  7. W. Li, C. Song, Z. Li, "An accelerated recurrent neural network for visual servo control of a robotic flexible endoscope with joint limit constraint," IEEE Transactions on Industrial Electronics, vol. 67, no. 12, pp. 10787-10797, Dec. 2020. https://doi.org/10.1109/tie.2019.2959481
  8. S. Kagami, K. Omi, K. Hashimoto, "Alignment of a flexible sheet object with position- based and image-based visual servoing," Advanced Robotics, vol. 30, no. 15, pp. 965-978, 2016. https://doi.org/10.1080/01691864.2016.1183518
  9. Z. Zhou, B. Wu, "Adaptive sliding mode control of manipulators based on fuzzy random vector function links for friction compensation," OPTIK, vol. 227, Feb. 2021.
  10. Z. Zhou, R. Zhang, Z. Zhu, "Robust Kalman filtering with LSTM for image-based visual servo control," Multimedia Tools and Applications, vol. 78, no. 18, pp. 26341-26371, Sep. 2019. https://doi.org/10.1007/s11042-019-07773-0
  11. J. Dong, J Zhang, "A new imaged-based visual servoing method with velocity direction control," J. Franklin. Inst, vol. 357, no. 7, pp. 3993-4007, 2020. https://doi.org/10.1016/j.jfranklin.2020.01.012
  12. Cui L, Wang H, Liang X, Wang J, Chen W, "Visual servoing of a flexible aerial refueling boom with an eye-in-hand camera," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 10, pp. 6282-6292, Oct. 2021. https://doi.org/10.1109/TSMC.2019.2957992
  13. G. Q. Dong, Z. H. Zhu, "Position-based visual servo control of autonomous robotic manipulators," Acta Astronautica, vol. 115, pp. 291-302, Oct-Nov. 2015. https://doi.org/10.1016/j.actaastro.2015.05.036
  14. Yaozhen He, Jian Gao, Yimin Chen, "Deep learning-based pose prediction for visual servoing of robotic manipulators using image similarity," Neurocomputing, vol. 491, pp. 343-352, 2022. https://doi.org/10.1016/j.neucom.2022.03.045
  15. T. Drummond, R. Cipolla, "Real-time tracking of complex structures with on-line camera calibration," Image & Vision Computing, vol. 20, pp. 427-433, Apr. 2002. https://doi.org/10.1016/S0262-8856(02)00013-6
  16. F. Lizarralde, A. C. Leite, L. Hsu, R. R. Costa, "Adaptive visual servoing scheme free of image velocity measurement for uncertain robot manipulators," Automatica, vol. 49, no. 5, pp. 1304-1309, May. 2013. https://doi.org/10.1016/j.automatica.2013.01.047
  17. Y. Wang, G. L. Zhang, H. X. Lang, B. S. Zuo, C. W. Silva, "A modified image-based visual servo controller with hybrid camera configuration for robust robotic grasping," Robotics and Autonomous Systems, vol. 62, no. 10, pp. 1398-1407, Oct. 2014. https://doi.org/10.1016/j.robot.2014.06.003
  18. Tolga Yuksel, "Intelligent visual servoing with extreme learning machine and fuzzy logic," Expert Systems with Applications, vol. 72, pp. 344-356, 2017. https://doi.org/10.1016/j.eswa.2016.10.048
  19. J. Qian, J. B. Su, "On-line estimation of image Jacobian matrix based on Kalman filter," Control and Decision, vol. 18, no. 1, pp. 77-80, Oct. 2003. https://doi.org/10.3321/j.issn:1001-0920.2003.01.017
  20. X. Lv, X. Huang, "Fuzzy adaptive Kalman filtering based estimation of image Jacobin for uncalibrated visual servoing," in Proc. of the IEEE/RSJ/GI Conference Intelligent Robots and Systems, pp. 2167-2172, Oct. 2006.
  21. X. G. Zhong, X. Y. Zhong, X. F. Peng, "Robots visual servo control with features constraint employing Kalman-neural-network filtering scheme," Neurocomputing, vol. 151, pp. 268-277, May. 2015. https://doi.org/10.1016/j.neucom.2014.09.043
  22. H. A. Junaid, "ANN based robotic arm visual servoing nonlinear system," Procedia Computer Science, vol. 62, pp. 23-30, 2015. https://doi.org/10.1016/j.procs.2015.08.405
  23. Z. Miljkovic, M. Mitic, M. Lazarevic, B. Babic, "Neural network reinforcement learning for visual control of robot manipulators," Expert Systems with Applications, vol. 40, no. 5, pp. 1721-1736, Apr. 2013. https://doi.org/10.1016/j.eswa.2012.09.010
  24. Maxwell Hwang, Yu-Jen Chen, Ming-Yi Ju, Wei-Cheng Jiang, "A fuzzy CMAC learning approach to image based visual servoing system," Information Sciences, vol.576, pp.187-203, 2021, https://doi.org/10.1016/j.ins.2021.06.029
  25. O. Araar, N. Aouf, "A new hybrid approach for the visual servoing of VTOL UAVs from unknown geometries," in Proc. of the IEEE 22nd Mediterranean Conference on Control and Automation, pp. 1425-1432, 2014.
  26. Gossaye Mekonnen, Sanjeev Kumar, P.M. Pathak, "Wireless hybrid visual servoing of omnidirectional wheeled mobile robots," Robotics and Autonomous Systems, vol. 75, pp. 450-462, Jan. 2016. https://doi.org/10.1016/j.robot.2015.08.008
  27. R. E. Kalman, "A new approach to linear filtering and prediction problems," J. Basic Eng, vol. 82, pp. 35-45, 1960. https://doi.org/10.1115/1.3662552
  28. S. Y. Chen, "Kalman filter for robot vision: a survey," IEEE Trans. Ind. Electron, vol. 59, no. 11, pp. 4409-4420, Nov. 2012. https://doi.org/10.1109/TIE.2011.2162714
  29. G. B. Huang, Q. Y. Zhu, C. K. Siew, "Extreme learning machine: Theory and applications," Neurocomputing, vol. 70, pp. 489-501, Dec. 2006. https://doi.org/10.1016/j.neucom.2005.12.126
  30. G. B. Huang, L. Chen, C. K. Siew, "Universal Approximation Using Incremental Constructive Feedforward Networks with Random Hidden Nodes," IEEE Transactions on Neural Networks, vol. 17, no. 4, pp. 879-892, 2006. https://doi.org/10.1109/TNN.2006.875977
  31. Y. Zhang, G. Zhao, G., J. Sun, "Smart pathological brain detection by synthetic minority oversampling technique, extreme learning machine, and jaya algorithm," Multimedia Tools and Applications, vol. 77, no. 17, pp. 22629-22648, Sep. 2018. https://doi.org/10.1007/s11042-017-5023-0
  32. G. B. Huang, H. M. Zhu, X. J. Ding, R. Zhang, "Extreme learning machine for regression and multiclass classification," IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, vol. 42 no. 2, pp. 513-529, Apr. 2012. https://doi.org/10.1109/TSMCB.2011.2168604
  33. Abdollahzadeh, B., F. S. Gharehchopogh, S. Mirjalili, "African vultures optimization algorithm: a new nature-inspired metaheuristic algorithm for global optimization problems," Computers & Industrial Engineering, vol. 158, Aug. 2021.
  34. H. Sutanto, R. Sharma, V. Varma, "Image based autodocking without calibration," in Proc. of IEEE International Conference on Robotics and Automation, pp. 974-979, Apr. 1997.
  35. S. R. Jang, C. T. Sun, "Neuro-fuzzy and soft computing: A computational approach to learning and machine intelligence," IEEE Transactions on Automatic Control, vol. 42, no. 10, pp. 1482-1484, Oct. 1997. https://doi.org/10.1109/TAC.1997.633847
  36. P. I. Corke, "Robotics, vision and control: Fundamental algorithms in MATLAB," Springer, 2011.
  37. B. Armstrong, O. Khatib, J. Burdick, "The explicit dynamic model and inertial parameters of the PUMA 560 arm," IEEE international conference on robotics and automation, pp. 510-518, Apr. 1986.
  38. G. Chesi, Y. S. Hung, "Global path-planning for constrained and optimal visual servoing," IEEE Transactions on Robotics, vol. 23, no. 5, pp. 1050-1060, Oct. 2007. https://doi.org/10.1109/TRO.2007.903817