• Title, Summary, Keyword: rate of convergence

Convergence Rate Improvement of the Blind Equalization Algorithm for QAM System using Selective NCMA (QAM 시스템에 선택적으로 NCMA를 적용한 블라인드 등화 알고리즘의 수렴속도 개선)

  • 강윤석;안상식
    • Proceedings of the IEEK Conference
    • /
    • /
    • pp.43-46
    • /
    • 1999
  • Blind equalizers recover the transmitted data using only the signal's statistical characteristics. Because of its computational simplicity and fast convergence rate, CMA is widely used in practice. Blind equalizers, however, converge much more slowly than conventional equalizers that use training signals. To improve the convergence rate, many modified blind equalization algorithms have been proposed. Among these, the Normalized CMA (NCMA) increases the convergence rate by allowing a large step size. Unfortunately, it can be applied only to constant-modulus signal constellations. In this paper, we propose the Selective NCMA (SNCMA), which improves the convergence rate of blind equalization algorithms by applying NCMA to non-constant-modulus signalling schemes such as QAM constellations. We achieve a fast start-up convergence rate and a reduced steady-state residual error.
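CMA and its normalized variant are standard enough to sketch. The following is a minimal illustration of the two tap-update rules the abstract contrasts; the variable names and the dispersion constant `R2` are illustrative, not taken from the paper:

```python
import numpy as np

def cma_update(w, x, mu, R2):
    """One tap update of the Constant Modulus Algorithm.
    w: equalizer taps, x: input regressor, mu: step size,
    R2: dispersion constant of the target constellation."""
    y = np.vdot(w, x)                    # equalizer output y = w^H x
    e = y * (np.abs(y) ** 2 - R2)        # CMA error term
    return w - mu * np.conj(e) * x       # stochastic-gradient step

def ncma_update(w, x, mu, R2, eps=1e-8):
    """Normalized CMA: the step is divided by the input power,
    which is what permits the large step size the abstract mentions."""
    y = np.vdot(w, x)
    e = y * (np.abs(y) ** 2 - R2)
    norm = np.vdot(x, x).real + eps      # regularized input power
    return w - (mu / norm) * np.conj(e) * x
```

For a QAM constellation the symbol modulus is not constant, which is why plain NCMA fails there; the paper's selective variant decides per symbol whether the normalized step is applied.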


CONVERGENCE RATE OF EXTREMES FOR THE GENERALIZED SHORT-TAILED SYMMETRIC DISTRIBUTION

  • Lin, Fuming;Peng, Zuoxiang;Yu, Kaizhi
    • Bulletin of the Korean Mathematical Society
    • /
    • v.53 no.5
    • /
    • pp.1549-1566
    • /
    • 2016
  • Denote by $M_n$ the maximum of $n$ independent and identically distributed variables from the generalized short-tailed symmetric distribution. This paper establishes the pointwise convergence rate of the distribution of $M_n$ to the Gumbel law $\exp(-e^{-x})$, as well as the convergence rate under the supremum metric.
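The flavor of such a rate result can be illustrated numerically. The sketch below uses the standard exponential distribution (not the paper's generalized short-tailed symmetric distribution, whose density is not reproduced here) and measures the supremum distance between the law of the normalized maximum and the Gumbel limit:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_maxima(n, reps=200_000):
    """Sample M_n = max of n iid Exp(1) variables via its exact CDF
    (1 - e^{-x})^n, then center by the normalizing constant b_n = log n."""
    u = rng.random(reps)
    m = -np.log1p(-u ** (1.0 / n))       # inverse-CDF sampling of M_n
    return m - np.log(n)

def sup_distance(samples, grid):
    """Supremum distance between the empirical CDF and the Gumbel CDF."""
    gumbel = np.exp(-np.exp(-grid))
    ecdf = np.array([(samples <= x).mean() for x in grid])
    return float(np.abs(ecdf - gumbel).max())

grid = np.linspace(-2.0, 5.0, 50)
d_small = sup_distance(normalized_maxima(10), grid)     # n = 10
d_large = sup_distance(normalized_maxima(1000), grid)   # n = 1000
```

The distance shrinks as $n$ grows; quantifying how fast it shrinks is precisely the content of a supremum-metric convergence-rate theorem.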

ON THE ORDER AND RATE OF CONVERGENCE FOR PSEUDO-SECANT-NEWTON'S METHOD LOCATING A SIMPLE REAL ZERO

  • Kim, Young Ik
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.19 no.2
    • /
    • pp.133-139
    • /
    • 2006
  • By combining the classical Newton's method with the pseudo-secant method, pseudo-secant-Newton's method is constructed and its order and rate of convergence are investigated. Given a function $f:\mathbb{R}{\rightarrow}\mathbb{R}$ that has a simple real zero ${\alpha}$ and is sufficiently smooth in a small neighborhood of ${\alpha}$, the convergence behavior of pseudo-secant-Newton's method is analyzed near ${\alpha}$. The order of convergence is shown to be cubic and the rate of convergence is proven to be $\left(\frac{f^{{\prime}{\prime}}(\alpha)}{2f^{\prime}(\alpha)}\right)^2$. Numerical experiments, carried out with high-precision programming in Mathematica, confirm the theory presented here.
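The paper's pseudo-secant-Newton iteration itself is not reproduced here, but the standard diagnostic behind such results, estimating the order of convergence from successive errors, is easy to sketch. Plain Newton's method (order 2) serves as the stand-in; the same ratio test applied to a cubically convergent method would yield values near 3:

```python
import math

def newton(f, df, x0, steps=3):
    """Plain Newton iteration near a simple zero (quadratic convergence)."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs

# f(x) = x^2 - 2 has the simple real zero alpha = sqrt(2)
alpha = math.sqrt(2.0)
xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
errs = [abs(x - alpha) for x in xs]

# Estimated order: p_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1})
orders = [math.log(errs[k + 1] / errs[k]) / math.log(errs[k] / errs[k - 1])
          for k in range(1, len(errs) - 1)]
```

In double precision only a few steps are usable before the error hits machine epsilon, which is why the paper's own experiments use high-precision arithmetic in Mathematica.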


ON EXACT CONVERGENCE RATE OF STRONG NUMERICAL SCHEMES FOR STOCHASTIC DIFFERENTIAL EQUATIONS

  • Nam, Dou-Gu
    • Bulletin of the Korean Mathematical Society
    • /
    • v.44 no.1
    • /
    • pp.125-130
    • /
    • 2007
  • We propose a simple and intuitive method to derive the exact convergence rate of the global $L_2$-norm error for strong numerical approximation of stochastic differential equations, a result previously reported by Hofmann and M{\"u}ller-Gronbach (2004). We conclude that any strong numerical scheme of order ${\gamma} > 1/2$ has the same optimal convergence rate for this error. The method clearly reveals the structure of the global $L_2$-norm error and is similarly applicable for evaluating the convergence rate of global uniform approximations.
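The kind of rate statement the abstract makes can be checked empirically on an equation with a known solution. The sketch below (geometric Brownian motion, an illustrative choice, not an example from the paper) estimates the strong $L_2$ endpoint error of the Euler-Maruyama scheme at two step sizes and recovers an order near 1/2:

```python
import numpy as np

rng = np.random.default_rng(1)

def em_strong_error(n_steps, mu=0.5, sigma=1.0, x0=1.0, T=1.0, paths=20_000):
    """RMS endpoint error of Euler-Maruyama for dX = mu X dt + sigma X dW,
    measured against the exact geometric-Brownian-motion solution driven
    by the same Brownian increments."""
    h = T / n_steps
    dW = rng.normal(0.0, np.sqrt(h), size=(paths, n_steps))
    exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))
    x = np.full(paths, x0)
    for k in range(n_steps):
        x = x + mu * x * h + sigma * x * dW[:, k]   # Euler-Maruyama step
    return float(np.sqrt(np.mean((x - exact) ** 2)))

e_coarse = em_strong_error(32)
e_fine = em_strong_error(128)
# shrinking h by a factor of 4 should shrink the L2 error like h^(1/2)
rate = float(np.log(e_coarse / e_fine) / np.log(128 / 32))
```

For multiplicative noise, Euler-Maruyama has strong order 1/2; higher-order schemes such as Milstein would show a steeper slope under the same test.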

The dynamics of self-organizing feature map with constant learning rate and binary reinforcement function (시불변 학습계수와 이진 강화 함수를 가진 자기 조직화 형상지도 신경회로망의 동적특성)

  • Seok, Jin-Uk;Jo, Seong-Won
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.2 no.2
    • /
    • pp.108-114
    • /
    • 1996
  • We present proofs of the stability and convergence of the self-organizing feature map (SOFM) neural network with a time-invariant learning rate and a binary reinforcement function. A major concern in self-organizing feature map neural networks is the learning rate, the analogue of the "Kalman filter" gain in the stochastic control field, which is a monotonically decreasing function converging to 0 so as to satisfy the minimum variance property. In this paper, we show the stability and convergence of the self-organizing feature map neural network with a time-invariant learning rate. The analysis of the proposed algorithm shows that stability and convergence are guaranteed, with exponential stability and weak convergence properties as well.
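A minimal sketch of the setting the paper analyzes: a 1-D SOFM trained with a constant learning rate and a 0/1 (binary) neighborhood reinforcement. The map size, data, and radius below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def train_sofm(data, w, eta=0.1, radius=1, epochs=50):
    """1-D self-organizing feature map. eta is held constant over time
    (time-invariant learning rate); the reinforcement is binary: units
    within `radius` of the winner receive the full eta step, all others 0."""
    w = w.copy()
    for _ in range(epochs):
        for x in data:
            winner = int(np.argmin(np.abs(w - x)))
            lo, hi = max(0, winner - radius), min(len(w), winner + radius + 1)
            w[lo:hi] += eta * (x - w[lo:hi])   # 0/1 neighborhood update
    return w

def quantization_error(w, data):
    """Mean distance from each sample to its nearest map unit."""
    return float(np.mean([np.min(np.abs(w - x)) for x in data]))

data = rng.random(500)       # uniform 1-D input distribution
w0 = rng.random(10)          # random initial weights
w = train_sofm(data, w0)
```

With a constant learning rate the weights keep fluctuating instead of freezing, which is exactly why the paper's result is stated in terms of exponential stability and weak (in-distribution) convergence rather than almost-sure convergence.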


Rate of Convergence in Inviscid Limit for 2D Navier-Stokes Equations with Navier Friction Condition for Nonsmooth Initial Data

  • Kim, Namkwon
    • Journal of the Chosun Natural Science
    • /
    • v.6 no.1
    • /
    • pp.53-56
    • /
    • 2013
  • We are interested in the rate of convergence of solutions of the 2D Navier-Stokes equations in a smooth bounded domain as the viscosity tends to zero under the Navier friction condition. If the initial velocity is smooth enough ($u{\in}W^{2,p}$, p>2), it is known that the rate of convergence is linearly proportional to the viscosity. Here, we consider the rate of convergence for nonsmooth velocity fields when the gradient of the corresponding solution of the Euler equations belongs to certain Orlicz spaces. As a corollary, if the initial vorticity is bounded and small enough, we obtain a sublinear rate of convergence.

Optimal Convergence Rate of Empirical Bayes Tests for Uniform Distributions

  • Liang, Ta-Chen
    • Journal of the Korean Statistical Society
    • /
    • v.31 no.1
    • /
    • pp.33-43
    • /
    • 2002
  • The empirical Bayes linear loss two-action problem is studied. An empirical Bayes test $\delta_n^*$ is proposed. It is shown that $\delta_n^*$ is asymptotically optimal in the sense that its regret converges to zero at a rate $n^{-1}$ over a class of priors, and that $n^{-1}$ is the optimal rate of convergence of empirical Bayes tests.