• Title/Abstract/Keyword: Subspace

721 search results (processing time: 0.101 s)

Forward Backward PAST (Projection Approximation Subspace Tracking) Algorithm for the Better Subspace Estimation Accuracy

  • Lim, Jun-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 27, No. 1E
    • /
    • pp.25-29
    • /
    • 2008
  • The projection approximation subspace tracking (PAST) algorithm is one of the more attractive subspace tracking algorithms because it estimates the signal subspace adaptively and continuously, and its computational complexity is relatively low. However, the algorithm still leaves room for improvement in subspace estimation accuracy. In this paper, we propose a new algorithm that improves the subspace estimation accuracy by using a normally ordered input vector and a reversely ordered input vector simultaneously.
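The standard PAST recursion that the paper builds on can be sketched as follows. This is a minimal illustrative implementation of the classical PAST update; the paper's forward-backward variant, which additionally feeds the reversely ordered input through the same recursion, is not reproduced here, and all names and parameters are chosen for illustration:

```python
import numpy as np

def past_update(W, P, x, beta=0.99):
    """One PAST iteration: tracks an r-dimensional signal subspace
    from snapshot x. W: (n, r) basis estimate; P: (r, r) inverse
    correlation matrix of the projected data."""
    y = W.T @ x                       # project snapshot onto current basis
    h = P @ y
    g = h / (beta + y @ h)            # RLS-style gain vector
    P = (P - np.outer(g, h)) / beta   # update inverse correlation
    e = x - W @ y                     # projection residual
    W = W + np.outer(e, g)            # rank-one basis correction
    return W, P

rng = np.random.default_rng(0)
n, r, T = 8, 2, 2000
A = rng.standard_normal((n, r))       # true mixing matrix (its column space is the signal subspace)
W, _ = np.linalg.qr(rng.standard_normal((n, r)))  # random orthonormal start
P = np.eye(r)
for _ in range(T):
    x = A @ rng.standard_normal(r) + 0.01 * rng.standard_normal(n)
    W, P = past_update(W, P, x)

# the estimated basis should span (almost) the same subspace as A;
# this overlap measure approaches sqrt(r) for identical subspaces
Q_true, _ = np.linalg.qr(A)
Q_est, _ = np.linalg.qr(W)
overlap = np.linalg.norm(Q_true.T @ Q_est)
print(round(overlap, 3))
```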

ON MULTI SUBSPACE-HYPERCYCLIC OPERATORS

  • Moosapoor, Mansooreh
    • Communications of the Korean Mathematical Society
    • /
    • Vol. 35, No. 4
    • /
    • pp.1185-1192
    • /
    • 2020
  • In this paper, we introduce and investigate multi subspace-hypercyclic operators and prove that multi-hypercyclic operators are multi subspace-hypercyclic. We show that if T is M-hypercyclic or multi M-hypercyclic, then $T^n$ is multi M-hypercyclic for any natural number n, and we use this result to construct examples of multi subspace-hypercyclic operators. We prove that multi M-hypercyclic operators have somewhere dense orbits in M. We show that analytic Toeplitz operators cannot be multi subspace-hypercyclic. We also state a sufficient condition for coanalytic Toeplitz operators to be multi subspace-hypercyclic.

Optimization of Random Subspace Ensemble for Bankruptcy Prediction

  • Min, Sung-Hwan
    • Journal of Information Technology Services
    • /
    • Vol. 14, No. 4
    • /
    • pp.121-135
    • /
    • 2015
  • Ensemble classification combines multiple classifiers instead of using a single classifier, and ensemble classifiers have recently attracted much attention in the data mining community. Ensemble learning techniques have proven very useful for improving prediction accuracy; bagging, boosting, and random subspace are the most popular ensemble methods. In random subspace, each base classifier is trained on a randomly chosen feature subspace of the original feature space, and the outputs of the base classifiers are usually aggregated by a simple majority vote. In this study, we applied the random subspace method to the bankruptcy prediction problem and proposed a method for optimizing the random subspace ensemble: a genetic algorithm was used to optimize the classifier subset of the ensemble. The proposed genetic-algorithm-based random subspace ensemble model was applied to the bankruptcy prediction problem on a real data set and compared with other models. Experimental results showed that the proposed model outperformed the other models.
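The random subspace step described above can be sketched with scikit-learn, where `BaggingClassifier` with `bootstrap=False` and `max_features < 1.0` yields a random subspace ensemble. The synthetic data below stands in for the paper's (non-public) bankruptcy data set, and the genetic-algorithm optimization of the classifier subset is not shown:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for the bankruptcy data
X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# random subspace: each tree sees a random half of the features but all
# training instances (bootstrap=False is what distinguishes it from bagging)
rs = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25,
                       max_features=0.5, bootstrap=False, random_state=0)
rs.fit(X_tr, y_tr)
acc = rs.score(X_te, y_te)   # majority-vote accuracy on held-out data
print(round(acc, 3))
```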

Interference Suppression Using Principal Subspace Modification in Multichannel Wiener Filter and Its Application to Speech Recognition

  • Kim, Gi-Bak
    • ETRI Journal
    • /
    • Vol. 32, No. 6
    • /
    • pp.921-931
    • /
    • 2010
  • It has been shown that the principal subspace-based multichannel Wiener filter (MWF) provides better interference suppression than the conventional MWF in the case of a single target source. It can efficiently estimate the target speech component in the principal subspace, which estimates the acoustic transfer function up to a scaling factor. However, as the input signal-to-interference ratio (SIR) becomes lower, larger errors are incurred in the estimation of the acoustic transfer function by the principal subspace method, degrading the interference suppression performance. To alleviate this problem, a principal subspace modification method was proposed in previous work; it reduces the estimation error of the acoustic transfer function vector at low SIRs. In this work, a frequency-band-dependent interpolation technique is further employed for the principal subspace modification. A speech recognition test conducted with the Sphinx-4 system demonstrates the practical usefulness of the proposed method as a front-end for a speech recognizer in a distant-talking, interferer-present environment.
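The principal-subspace step the abstract relies on — estimating the acoustic transfer function up to a scaling factor as the dominant eigenvector of the spatial covariance — can be illustrated for a single frequency bin as follows. Real-valued signals and spatially white noise are simplifying assumptions; the full MWF and the proposed modification are not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
M, T = 4, 5000                      # microphones, frames
a = rng.standard_normal(M)          # true acoustic transfer function (one bin)
s = rng.standard_normal(T)          # target source signal
n = 0.1 * rng.standard_normal((M, T))
X = np.outer(a, s) + n              # mic signals: rank-1 target + white noise

# principal subspace method: the dominant eigenvector of the spatial
# covariance estimates the transfer function up to a scaling factor
R = X @ X.T / T
eigvals, eigvecs = np.linalg.eigh(R)
a_hat = eigvecs[:, -1]              # eigenvector of the largest eigenvalue

# |cosine| alignment between the true ATF and the (unit-norm) estimate
cos = abs(a_hat @ a) / np.linalg.norm(a)
print(round(cos, 3))
```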

Orthonormalized Forward Backward PAST (Projection Approximation Subspace Tracking) Algorithm

  • Lim, Jun-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • Vol. 28, No. 6
    • /
    • pp.514-519
    • /
    • 2009
  • PAST (projection approximation subspace tracking) is a representative signal subspace estimation algorithm that has been comparatively studied by many researchers. It has become popular because it requires relatively low computational complexity for signal subspace estimation. However, it leaves considerable room for improvement, and refined algorithms continue to be developed with respect to estimation accuracy and subspace orthonormality. This paper proposes an orthonormalization algorithm for FB-PAST (Forward-Backward PAST), one of the algorithms developed to improve the estimation accuracy of the original PAST algorithm.
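The orthonormality issue the abstract addresses can be illustrated with a generic QR-based re-orthonormalization step applied to a subspace basis between tracking updates; the paper's specific orthonormalization for FB-PAST is not reproduced here:

```python
import numpy as np

def orthonormalize(W):
    """Re-orthonormalize a subspace basis via thin QR; a generic stand-in
    for the orthonormalization step added to the tracking recursion."""
    Q, R = np.linalg.qr(W)
    # fix column signs so the basis varies continuously across iterations
    return Q * np.sign(np.diag(R))

W = np.random.default_rng(2).standard_normal((8, 2))
W = orthonormalize(W)
print(np.allclose(W.T @ W, np.eye(2)))   # prints True: columns are orthonormal
```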

ON SUBSPACE-SUPERCYCLIC SEMIGROUP

  • El Berrag, Mohammed;Tajmouati, Abdelaziz
    • Communications of the Korean Mathematical Society
    • /
    • Vol. 33, No. 1
    • /
    • pp.157-164
    • /
    • 2018
  • A $C_0$-semigroup $\tau=(T_t)_{t\ge 0}$ on a Banach space $X$ is called subspace-supercyclic for a subspace $M$ if $\mathbb{C}\,\mathrm{Orb}(\tau,x)\cap M=\{\lambda T_t x : \lambda\in\mathbb{C},\ t\ge 0\}\cap M$ is dense in $M$ for some vector $x\in M$. In this paper we characterize the notion of a subspace-supercyclic $C_0$-semigroup. We also provide a subspace-supercyclicity criterion for $C_0$-semigroups and offer two equivalent conditions for this criterion.

Tutorial: Dimension reduction in regression with a notion of sufficiency

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 23, No. 2
    • /
    • pp.93-103
    • /
    • 2016
  • In this paper, we discuss dimension reduction of predictors $\mathbf{X}\in\mathbb{R}^p$ in a regression of $Y\mid\mathbf{X}$ with a notion of sufficiency, called sufficient dimension reduction. In sufficient dimension reduction, the original predictors $\mathbf{X}$ are replaced by a lower-dimensional linear projection without loss of information on selected aspects of the conditional distribution. Depending on the aspects, the central subspace, the central mean subspace, and the central $k$th-moment subspace are defined and investigated as the primary objects of interest. The relationships among the three subspaces and how they change under non-singular transformations of $\mathbf{X}$ are then studied. We discuss two conditions that guarantee the existence of the three subspaces, which constrain the marginal distribution of $\mathbf{X}$ and the conditional distribution of $Y\mid\mathbf{X}$. A general approach to estimating them is also introduced, along with an explanation of the conditions commonly assumed in most sufficient dimension reduction methodologies.
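One well-known estimator of the central subspace in this framework is sliced inverse regression (SIR). The following minimal sketch (all names illustrative, one-dimensional structure assumed) recovers a single direction from the slice means of the standardized predictors:

```python
import numpy as np

def sir_direction(X, y, n_slices=10):
    """Sliced inverse regression: estimate one basis direction of the
    central subspace from slice means of the standardized predictors."""
    n, p = X.shape
    # standardize the predictors: Z = (X - mean) @ Sigma^{-1/2}
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - X.mean(axis=0)) @ inv_sqrt
    # slice the sample on y and average Z within each slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # leading eigenvector of the slice-mean matrix, mapped back
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, -1]

rng = np.random.default_rng(3)
n, p = 2000, 5
X = rng.standard_normal((n, p))
b = np.array([1.0, -1.0, 0.0, 0.0, 0.0]) / np.sqrt(2)
y = np.exp(X @ b) + 0.1 * rng.standard_normal(n)  # Y depends on X only via b'X
b_hat = sir_direction(X, y)
cos = abs(b_hat @ b) / np.linalg.norm(b_hat)      # alignment with the truth
print(round(cos, 3))
```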

APPROXIMATION PROPERTIES OF PAIRS OF SUBSPACES

  • Lee, Keun Young
    • Bulletin of the Korean Mathematical Society
    • /
    • Vol. 56, No. 3
    • /
    • pp.563-568
    • /
    • 2019
  • This study is concerned with the approximation properties of pairs. For $\lambda\ge 1$, we prove that given a Banach space $X$ and a closed subspace $Z_0$, if the pair $(X, Z_0)$ has the $\lambda$-bounded approximation property ($\lambda$-BAP), then for every ideal $Z$ containing $Z_0$, the pair $(Z, Z_0)$ has the $\lambda$-BAP; further, if $Z$ is a closed subspace of $X$ and the pair $(X, Z)$ has the $\lambda$-BAP, then for every separable subspace $Y_0$ of $X$, there exists a separable closed subspace $Y$ containing $Y_0$ such that the pair $(Y, Y\cap Z)$ has the $\lambda$-BAP. We also prove that if $Z$ is a separable closed subspace of $X$, then the pair $(X, Z)$ has the $\lambda$-BAP if and only if for every separable subspace $Y_0$ of $X$, there exists a separable closed subspace $Y$ containing $Y_0\cup Z$ such that the pair $(Y, Z)$ has the $\lambda$-BAP.

Improving an Ensemble Model Using Instance Selection Method

  • Min, Sung-Hwan
    • Journal of Society of Korea Industrial and Systems Engineering
    • /
    • Vol. 39, No. 1
    • /
    • pp.105-115
    • /
    • 2016
  • Ensemble classification combines individually trained classifiers to yield more accurate predictions than individual models, and ensemble techniques are very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers: each classifier in the ensemble is trained on a randomly drawn subset of the features. The instance selection technique selects critical instances while removing irrelevant and noisy ones from the original dataset. Both methods are well known in the field of data mining and have proven very effective in many applications, but few studies have focused on integrating them. This study therefore proposes a new hybrid ensemble model that integrates instance selection and the random subspace method using genetic algorithms (GAs) to improve the performance of a random subspace ensemble: GAs select optimal (or near-optimal) instances, which are then used as input data for the random subspace ensemble. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, diversity, and the average classification rate of the base classifiers in the ensemble. The experimental results demonstrate that the proposed model outperformed the other models, including the single model, the instance selection model, and the original random subspace ensemble model.
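The GA-based instance selection loop described above can be sketched as follows. This deliberately tiny GA (truncation selection, uniform crossover, bit-flip mutation) evolves binary instance masks whose fitness is the validation accuracy of a random subspace ensemble; synthetic data stands in for the paper's credit data sets, and all operators and parameters are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# noisy synthetic stand-in for the credit data
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           flip_y=0.1, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    """Validation accuracy of a random subspace ensemble trained only on
    the instances selected by the binary chromosome `mask`."""
    if mask.sum() < 20:          # guard against degenerate chromosomes
        return 0.0
    model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                              max_features=0.5, bootstrap=False, random_state=0)
    model.fit(X_tr[mask], y_tr[mask])
    return model.score(X_val, y_val)

pop = rng.random((8, len(y_tr))) < 0.8            # population of instance masks
for gen in range(5):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-4:]]        # truncation selection
    children = []
    for _ in range(4):
        p1, p2 = parents[rng.choice(4, 2, replace=False)]
        child = np.where(rng.random(len(p1)) < 0.5, p1, p2)  # uniform crossover
        child ^= rng.random(len(child)) < 0.01               # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(round(fitness(best), 3))
```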

HEREDITARY PROPERTIES OF CERTAIN IDEALS OF COMPACT OPERATORS

  • Cho, Chong-Man;Lee, Eun-Joo
    • Bulletin of the Korean Mathematical Society
    • /
    • Vol. 41, No. 3
    • /
    • pp.457-464
    • /
    • 2004
  • Let X be a Banach space and Z a closed subspace of a Banach space Y. Denote by L(X, Y) the space of all bounded linear operators from X to Y and by K(X, Y) its subspace of compact linear operators. Using Hahn-Banach extension operators corresponding to ideal projections, we prove that if either $X^{**}$ or $Y^*$ has the Radon-Nikodym property and K(X, Y) is an M-ideal (resp. an HB-subspace) in L(X, Y), then K(X, Z) is also an M-ideal (resp. an HB-subspace) in L(X, Z). If K(X, Y) has property SU in L(X, Y) instead of being an M-ideal, then K(X, Z) also has property SU in L(X, Z). If X is a Banach space such that $X^*$ has the metric compact approximation property with adjoint operators, then the M-ideal (resp. HB-subspace) property of K(X, Y) in L(X, Y) is inherited by K(X, Z) in L(X, Z).