• Title/Summary/Keyword: Subspace


Forward Backward PAST (Projection Approximation Subspace Tracking) Algorithm for the Better Subspace Estimation Accuracy

  • Lim, Jun-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.1E
    • /
    • pp.25-29
    • /
    • 2008
  • The projection approximation subspace tracking (PAST) algorithm is an attractive subspace tracker because it estimates the signal subspace adaptively and continuously at relatively low computational complexity. However, its subspace estimation accuracy still leaves room for improvement. In this paper, we propose a new algorithm that improves the estimation accuracy by using a normally ordered input vector and a reversely ordered input vector simultaneously.
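
The paper's exact forward-backward recursion is not reproduced in the abstract, but the idea can be illustrated against the standard PAST recursion (Yang's RLS-style update): run the same update on the normally ordered snapshot and on its reversed, conjugated counterpart. A minimal Python sketch, where the forward-backward pairing is an assumption about the scheme rather than the authors' precise algorithm:

    import numpy as np

    def past_update(W, P, x, beta=0.97):
        """One standard PAST recursion step: tracks an r-dimensional
        signal subspace basis W (n x r) from the snapshot x (length n).
        P is the r x r inverse correlation matrix of the projections."""
        y = W.conj().T @ x                     # project snapshot onto subspace
        h = P @ y
        g = h / (beta + np.vdot(y, h))         # RLS gain
        P = (P - np.outer(g, np.conj(h))) / beta
        e = x - W @ y                          # projection-approximation error
        W = W + np.outer(e, np.conj(g))
        return W, P

    def fb_past_update(W, P, x, beta=0.97):
        """Assumed forward-backward variant: feed the normally ordered
        snapshot and its reversed, conjugated counterpart through the
        same recursion, so both orderings shape the estimate."""
        W, P = past_update(W, P, x, beta)                 # forward pass
        W, P = past_update(W, P, np.conj(x[::-1]), beta)  # backward pass
        return W, P

    # Typical initialization (n sensors, r-dimensional subspace):
    # W = np.linalg.qr(rng.standard_normal((n, r)))[0]; P = np.eye(r)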

ON MULTI SUBSPACE-HYPERCYCLIC OPERATORS

  • Moosapoor, Mansooreh
    • Communications of the Korean Mathematical Society
    • /
    • v.35 no.4
    • /
    • pp.1185-1192
    • /
    • 2020
  • In this paper, we introduce and investigate multi subspace-hypercyclic operators and prove that multi-hypercyclic operators are multi subspace-hypercyclic. We show that if T is M-hypercyclic or multi M-hypercyclic, then $T^n$ is multi M-hypercyclic for any natural number n, and we use this result to construct examples of multi subspace-hypercyclic operators. We prove that multi M-hypercyclic operators have somewhere dense orbits in M. We show that analytic Toeplitz operators cannot be multi subspace-hypercyclic. Also, we state a sufficient condition for coanalytic Toeplitz operators to be multi subspace-hypercyclic.
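
For orientation, the underlying notion of subspace-hypercyclicity (introduced by Madore and Martínez-Avendaño) can be stated as follows; this is the standard definition from that literature, not text from the paper:

    % M-hypercyclicity for an operator T on a Banach space X and a
    % closed subspace M of X:
    \[
      T \text{ is } M\text{-hypercyclic} \iff
      \exists\, x \in X \ \text{such that}\
      \mathrm{Orb}(T,x)\cap M = \{T^{n}x : n \geq 0\}\cap M
      \ \text{is dense in } M.
    \]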

Optimization of Random Subspace Ensemble for Bankruptcy Prediction (재무부실화 예측을 위한 랜덤 서브스페이스 앙상블 모형의 최적화)

  • Min, Sung-Hwan
    • Journal of Information Technology Services
    • /
    • v.14 no.4
    • /
    • pp.121-135
    • /
    • 2015
  • Ensemble classification utilizes multiple classifiers instead of a single classifier, and ensemble classifiers have recently attracted much attention in the data mining community. Ensemble learning techniques have proven very useful for improving prediction accuracy. Bagging, boosting, and random subspace are the most popular ensemble methods. In the random subspace method, each base classifier is trained on a randomly chosen feature subspace of the original feature space, and the outputs of the base classifiers are usually aggregated by a simple majority vote. In this study, we applied the random subspace method to the bankruptcy prediction problem and proposed a method for optimizing the random subspace ensemble: a genetic algorithm was used to optimize the classifier subset of the ensemble. This paper applied the proposed genetic algorithm based random subspace ensemble model to bankruptcy prediction using a real data set and compared it with other models. Experimental results showed that the proposed model outperformed the other models.
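
Since the abstract does not specify the GA-optimized ensemble, the sketch below shows only the baseline random subspace ensemble it starts from, using scikit-learn's BaggingClassifier with feature subsampling; the dataset and parameter values are placeholders, not the paper's setup:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder data standing in for the paper's bankruptcy dataset.
    X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Random subspace = bagging over features only: each tree sees a
    # random half of the feature space; predictions are aggregated
    # across the trees.
    rs = BaggingClassifier(
        DecisionTreeClassifier(),
        n_estimators=30,
        bootstrap=False,      # keep all training instances
        max_features=0.5,     # ...but only a random subset of features
        random_state=0,
    )
    rs.fit(X_tr, y_tr)
    print("test accuracy:", rs.score(X_te, y_te))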

Interference Suppression Using Principal Subspace Modification in Multichannel Wiener Filter and Its Application to Speech Recognition

  • Kim, Gi-Bak
    • ETRI Journal
    • /
    • v.32 no.6
    • /
    • pp.921-931
    • /
    • 2010
  • It has been shown that the principal subspace-based multichannel Wiener filter (MWF) outperforms the conventional MWF at suppressing interference in the case of a single target source. It can efficiently estimate the target speech component in the principal subspace, which approximates the acoustic transfer function up to a scaling factor. However, as the input signal-to-interference ratio (SIR) decreases, the principal subspace method incurs larger errors in estimating the acoustic transfer function, degrading interference suppression. To alleviate this problem, a principal subspace modification method was proposed in previous work; it reduces the estimation error of the acoustic transfer function vector at low SIRs. In this work, a frequency-band dependent interpolation technique is further employed for the principal subspace modification. A speech recognition test conducted with the Sphinx-4 system demonstrates the practical usefulness of the proposed method as front-end processing for a speech recognizer in a distant-talking, interferer-present environment.
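
As a rough illustration of the principal-subspace MWF idea (a generic rank-1 formulation, not the paper's modified estimator), one frequency bin can be processed as follows; the noisy-signal and noise correlation matrices Rx and Rn are assumed given per bin:

    import numpy as np

    def principal_subspace_mwf(Rx, Rn, mu=1.0, ref=0):
        """Sketch of a rank-1 (principal-subspace) multichannel Wiener
        filter for one frequency bin. Rx, Rn: channels x channels."""
        Rs = Rx - Rn                          # speech correlation estimate
        vals, vecs = np.linalg.eigh(Rs)
        h = vecs[:, -1]                       # principal eigenvector ~ acoustic
        sigma2 = max(vals[-1], 0.0)           # transfer function up to scale
        Rn_inv_h = np.linalg.solve(Rn, h)
        denom = mu + sigma2 * (h.conj() @ Rn_inv_h)
        # Rank-1 MWF estimating the speech component at the reference mic.
        return sigma2 * Rn_inv_h * np.conj(h[ref]) / denom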

Orthonormalized Forward Backward PAST (Projection Approximation Subspace Tracking) Algorithm (직교설 전후방 PAST (Projection Approximation Subspace Tracking) 알고리즘)

  • Lim, Jun-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.514-519
    • /
    • 2009
  • The projection approximation subspace tracking (PAST) algorithm is an attractive subspace tracker because it estimates the signal subspace adaptively and continuously at relatively low computational complexity. However, its subspace estimation accuracy still leaves room for improvement, and FB-PAST (forward-backward PAST) is one result of such improvement studies. In this paper, we propose a new algorithm that improves the orthogonality of FB-PAST.
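
The paper's specific orthonormalization recursion is not given in the abstract; a generic way to keep a tracked basis orthonormal after each FB-PAST update is a thin QR refactorization, sketched here only as a stand-in:

    import numpy as np

    def orthonormalize(W):
        """Re-orthonormalize the columns of the tracked basis W
        (real-valued here) via thin QR; a generic substitute for the
        paper's orthonormalization step, not its actual recursion."""
        Q, R = np.linalg.qr(W)
        # Fix the sign ambiguity so columns vary smoothly across updates.
        return Q * np.sign(np.diag(R))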

ON SUBSPACE-SUPERCYCLIC SEMIGROUP

  • El Berrag, Mohammed;Tajmouati, Abdelaziz
    • Communications of the Korean Mathematical Society
    • /
    • v.33 no.1
    • /
    • pp.157-164
    • /
    • 2018
  • A $C_0$-semigroup $\tau=(T_t)_{t\geq 0}$ on a Banach space $X$ is called subspace-supercyclic for a subspace $M$ if $\mathbb{C}\,\mathrm{Orb}(\tau,x)\cap M=\{\lambda T_t x : \lambda\in\mathbb{C},\ t\geq 0\}\cap M$ is dense in $M$ for some vector $x\in M$. In this paper we characterize the notion of a subspace-supercyclic $C_0$-semigroup. At the same time, we provide a subspace-supercyclicity criterion for $C_0$-semigroups and offer two equivalent conditions of this criterion.

Tutorial: Dimension reduction in regression with a notion of sufficiency

  • Yoo, Jae Keun
    • Communications for Statistical Applications and Methods
    • /
    • v.23 no.2
    • /
    • pp.93-103
    • /
    • 2016
  • In the paper, we discuss dimension reduction of predictors $\mathbf{X}\in\mathbb{R}^p$ in a regression of $Y\mid\mathbf{X}$ with a notion of sufficiency called sufficient dimension reduction. In sufficient dimension reduction, the original predictors $\mathbf{X}$ are replaced by their lower-dimensional linear projection without loss of information on selected aspects of the conditional distribution. Depending on the aspect, the central subspace, the central mean subspace, and the central $k$th-moment subspace are defined and investigated as the primary interests. The relationships among the three subspaces and their changes under non-singular transformations of $\mathbf{X}$ are then studied. We discuss two conditions that guarantee the existence of the three subspaces; these constrain the marginal distribution of $\mathbf{X}$ and the conditional distribution of $Y\mid\mathbf{X}$. A general approach to estimating them is also introduced, along with an explanation of conditions commonly assumed in most sufficient dimension reduction methodologies.
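
As a concrete point of reference, sliced inverse regression (SIR) is one standard estimator of the central subspace; the sketch below is a textbook SIR implementation, not a method specific to this tutorial:

    import numpy as np

    def sir_directions(X, y, n_slices=10, n_dirs=2):
        """Sliced inverse regression: estimate basis directions of the
        central subspace from slice means of standardized predictors."""
        n, p = X.shape
        # Standardize X so the subspace is estimated on identity scale.
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        L = np.linalg.cholesky(np.linalg.inv(cov))   # L @ L.T = cov^{-1}
        Z = (X - mu) @ L
        # Slice observations by the order of y; average Z within slices.
        order = np.argsort(y)
        M = np.zeros((p, p))
        for idx in np.array_split(order, n_slices):
            m = Z[idx].mean(axis=0)
            M += (len(idx) / n) * np.outer(m, m)
        # Top eigenvectors of M span the (standardized) central subspace.
        vals, vecs = np.linalg.eigh(M)
        return L @ vecs[:, -n_dirs:]                 # back to X scale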

Improving an Ensemble Model Using Instance Selection Method (사례 선택 기법을 활용한 앙상블 모형의 성능 개선)

  • Min, Sung-Hwan
    • Journal of the Society of Korea Industrial and Systems Engineering
    • /
    • v.39 no.1
    • /
    • pp.105-115
    • /
    • 2016
  • Ensemble classification combines individually trained classifiers to yield more accurate predictions than individual models and is very useful for improving the generalization ability of classifiers. The random subspace ensemble technique is a simple but effective method for constructing ensemble classifiers: each base classifier is trained on a randomly drawn subset of the original features. The instance selection technique selects critical instances while removing irrelevant and noisy instances from the original dataset. Both methods are well known in the field of data mining and have proven very effective in many applications, but few studies have focused on integrating them. This study therefore proposed a new hybrid ensemble model that integrates instance selection and random subspace techniques using genetic algorithms (GAs) to improve the performance of a random subspace ensemble model. GAs are used to select optimal (or near-optimal) instances, which serve as input data for the random subspace ensemble model. The proposed model was applied to both Kaggle credit data and corporate credit data, and the results were compared with those of other models in terms of classification accuracy, level of diversity, and average classification rate of the base classifiers in the ensemble. The experimental results demonstrated that the proposed model outperformed the single model, the instance selection model, and the original random subspace ensemble model.
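
The abstract does not give the GA details, so the following is only a toy sketch of the idea: evolve a binary instance-selection mask whose fitness is the validation accuracy of a random-subspace ensemble trained on the selected instances; population size, mutation rate, and the guard threshold are illustrative choices:

    import numpy as np
    from sklearn.ensemble import BaggingClassifier
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    def fitness(mask, X_tr, y_tr, X_val, y_val):
        """Validation accuracy of a random-subspace ensemble trained
        only on the instances selected by the binary mask."""
        if mask.sum() < 10:                       # guard: too few instances
            return 0.0
        sel = mask.astype(bool)
        model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                                  bootstrap=False, max_features=0.5,
                                  random_state=0)
        model.fit(X_tr[sel], y_tr[sel])
        return model.score(X_val, y_val)

    def ga_select(X_tr, y_tr, X_val, y_val, pop=20, gens=15):
        """Tiny generational GA over instance-selection bitmasks."""
        n = len(X_tr)
        P = rng.integers(0, 2, size=(pop, n))
        for _ in range(gens):
            scores = np.array([fitness(m, X_tr, y_tr, X_val, y_val)
                               for m in P])
            elite = P[np.argsort(scores)[-pop // 2:]]    # keep best half
            children = []
            for _ in range(pop - len(elite)):
                a, b = elite[rng.integers(len(elite), size=2)]
                cut = rng.integers(1, n)                 # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child[rng.random(n) < 0.01] ^= 1         # bit-flip mutation
                children.append(child)
            P = np.vstack([elite, children])
        scores = np.array([fitness(m, X_tr, y_tr, X_val, y_val) for m in P])
        return P[scores.argmax()].astype(bool)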

HEREDITARY PROPERTIES OF CERTAIN IDEALS OF COMPACT OPERATORS

  • Cho, Chong-Man;Lee, Eun-Joo
    • Bulletin of the Korean Mathematical Society
    • /
    • v.41 no.3
    • /
    • pp.457-464
    • /
    • 2004
  • Let X be a Banach space and Z a closed subspace of a Banach space Y. Denote by L(X, Y) the space of all bounded linear operators from X to Y and by K(X, Y) its subspace of compact linear operators. Using Hahn-Banach extension operators corresponding to ideal projections, we prove that if either $X^{**}$ or $Y^{*}$ has the Radon-Nikodym property and K(X, Y) is an M-ideal (resp. an HB-subspace) in L(X, Y), then K(X, Z) is also an M-ideal (resp. an HB-subspace) in L(X, Z). If K(X, Y) has property SU in L(X, Y) instead of being an M-ideal in the above, then K(X, Z) also has property SU in L(X, Z). If X is a Banach space such that $X^{*}$ has the metric compact approximation property with adjoint operators, then the M-ideal (resp. HB-subspace) property of K(X, Y) in L(X, Y) is inherited by K(X, Z) in L(X, Z).
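
For orientation, the M-ideal notion used throughout can be stated as follows (the standard definition from the M-ideal literature, not text from the paper): a closed subspace J of a Banach space X is an M-ideal if its annihilator $J^{\perp}$ is the kernel of an L-projection on $X^{*}$, i.e.

    % Standard M-ideal definition (Harmand-Werner-Werner style):
    \[
      \exists\, P : X^{*} \to X^{*},\quad P^{2}=P,\quad \ker P = J^{\perp},
      \quad \|f\| = \|Pf\| + \|f-Pf\| \ \ \text{for all } f \in X^{*}.
    \]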

Improved speech enhancement of multi-channel Wiener filter using adjustment of principal subspace vector (다채널 위너 필터의 주성분 부공간 벡터 보정을 통한 잡음 제거 성능 개선)

  • Kim, Gibak
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.5
    • /
    • pp.490-496
    • /
    • 2020
  • We present a method to improve the performance of the multi-channel Wiener filter in noisy environments. In the subspace-based multi-channel Wiener filter with a single target source, the target speech component can be effectively estimated in the principal subspace of the speech correlation matrix. The speech correlation matrix can be estimated by subtracting the noise correlation matrix from the signal correlation matrix, based on the assumption that the cross-correlation between speech and interfering noise is negligible compared with the speech correlation. However, this assumption does not hold in the presence of strong interfering noise, and significant error can accordingly be induced in the principal subspace. In this paper, we propose to adjust the principal subspace vector using the speech presence probability and the steering vector for the desired speech source. The multi-channel speech presence probability is derived in the principal subspace and applied to adjust the principal subspace vector. Simulation results show that the proposed method improves the performance of the multi-channel Wiener filter in noisy environments.
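
The abstract describes the adjustment only qualitatively, so the following sketch is one plausible realization: interpolate between the data-driven principal eigenvector and the known steering vector, weighted by the multi-channel speech presence probability (SPP); the interpolation rule is hypothetical, not the paper's derivation:

    import numpy as np

    def adjust_principal_vector(u, d, spp):
        """Hypothetical adjustment of the principal subspace vector u:
        when the SPP (in [0, 1]) is low, pull the estimate toward the
        known steering vector d; when speech clearly dominates, trust
        the data-driven eigenvector."""
        d = d / np.linalg.norm(d)
        u = u / np.linalg.norm(u)
        # Crude sign alignment so the interpolation does not cancel.
        u = u * np.sign(np.vdot(d, u).real or 1.0)
        v = spp * u + (1.0 - spp) * d        # SPP-weighted interpolation
        return v / np.linalg.norm(v)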