• Title, Summary, Keyword: weighted distribution


Parameter Estimation and Confidence Limits for the Log-Gumbel Distribution (대수(對數)-Gumbel 확률분포함수(確率分布函數)의 매개변수(媒介變數) 추정(推定)과 신뢰한계(信賴限界) 유도(誘導))

  • Heo, Jun Haeng
    • Journal of The Korean Society of Civil Engineers / v.13 no.4 / pp.151-161 / 1993
  • The log-Gumbel distribution in real space is defined by transforming the conventional log-Gumbel distribution in log space. For this model, parameter estimation techniques based on the methods of moments, maximum likelihood, and probability weighted moments are applied. The asymptotic variances of the quantile estimators for each estimation method are derived to obtain the confidence limits for a given return period. Finally, the log-Gumbel model is applied to actual flood data to estimate the parameters, quantiles, and confidence limits. (A minimal illustrative sketch follows this entry.)

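The following is a minimal sketch, not the paper's derivation: fitting a Gumbel model to log-transformed flood flows by the method of moments (one of the three estimation methods named above) and mapping a return-period quantile back to real space. The flow values are hypothetical.

```python
# A minimal sketch of a method-of-moments log-Gumbel fit; data are hypothetical.
import numpy as np

EULER_GAMMA = 0.5772156649015329

def fit_log_gumbel_mom(x):
    """Fit Gumbel parameters (location mu, scale beta) to ln(x) by moments."""
    y = np.log(x)
    beta = np.std(y, ddof=1) * np.sqrt(6.0) / np.pi   # Gumbel scale from the std
    mu = np.mean(y) - EULER_GAMMA * beta              # Gumbel location from the mean
    return mu, beta

def log_gumbel_quantile(mu, beta, T):
    """Real-space quantile for return period T (non-exceedance prob 1 - 1/T)."""
    p = 1.0 - 1.0 / T
    y_T = mu - beta * np.log(-np.log(p))              # Gumbel quantile in log space
    return np.exp(y_T)                                # back-transform to real space

flows = np.array([812., 1150., 967., 1490., 705., 1320., 890., 1210.])  # hypothetical
mu, beta = fit_log_gumbel_mom(flows)
print(log_gumbel_quantile(mu, beta, T=100))           # 100-year flood estimate
```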

Parameter Estimation and Confidence Limits for the Weibull Distribution (Weibull 확률분포함수(確率分布函數)의 매개변수(媒介變數) 추정(推定)과 신뢰한계(信賴限界) 유도(誘導))

  • Heo, Jun Haeng
    • Journal of The Korean Society of Civil Engineers / v.13 no.4 / pp.141-150 / 1993
  • For the three-parameter Weibull distribution, parameter estimation techniques are applied and the asymptotic variances of the quantile estimators are derived to obtain the confidence limits for a given return period. Three estimation techniques are used: the methods of moments, maximum likelihood, and probability weighted moments. The three-parameter Weibull distribution is then applied as a flood frequency model to actual flood data. (A minimal illustrative sketch follows this entry.)

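A minimal sketch of the maximum-likelihood route named above, using scipy's generic fitter rather than the paper's estimators; the flood data are hypothetical and the asymptotic variance derivation is not reproduced.

```python
# A minimal sketch: MLE fit of the three-parameter Weibull and a return-period
# quantile. The flood data are hypothetical placeholders.
import numpy as np
from scipy import stats

floods = np.array([410., 620., 540., 880., 390., 730., 505., 660.])  # hypothetical

c, loc, scale = stats.weibull_min.fit(floods)     # shape, location, scale via MLE
T = 100                                           # return period in years
q_T = stats.weibull_min.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
print(f"shape={c:.3f}, loc={loc:.1f}, scale={scale:.1f}, 100-yr quantile={q_T:.1f}")
```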

Application of Bayesian Computational Techniques in Estimation of Posterior Distributional Properties of Lognormal Distribution

  • Begum, Mun-Ni; Ali, M. Masoom
    • Journal of the Korean Data and Information Science Society / v.15 no.1 / pp.227-237 / 2004
  • In this paper we present a Bayesian approach to estimating the location and scale parameters of the lognormal distribution using the iterative Gibbs sampling algorithm. We also present estimation of the location parameter by two non-iterative methods, importance sampling and weighted bootstrap, assuming the scale parameter is known. The estimates from the non-iterative techniques do not depend on the specification of hyperparameters, which is optimal from the Bayesian point of view. The estimates obtained by the more sophisticated Gibbs sampler vary slightly with the choice of hyperparameters. The objective of this paper is to illustrate these tools in a simple setup, which may be essential in more complicated situations. (A minimal Gibbs-sampling sketch follows this entry.)

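A minimal sketch of the Gibbs-sampling idea for lognormal data: y = ln(x) is normal, so with a flat prior on the location and an inverse-gamma prior on the variance the full conditionals are standard. The priors, hyperparameters, and data below are assumptions, not the paper's exact setup.

```python
# A minimal Gibbs-sampler sketch for lognormal location/scale; priors and data
# are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.5, size=200)   # synthetic lognormal sample
y = np.log(x)
n, ybar = len(y), y.mean()

a0, b0 = 2.0, 1.0                                  # assumed inverse-gamma hyperparameters
mu, sigma2 = ybar, y.var()
draws = []
for it in range(5000):
    # mu | sigma2, y  ~  Normal(ybar, sigma2 / n)   (flat prior on mu)
    mu = rng.normal(ybar, np.sqrt(sigma2 / n))
    # sigma2 | mu, y  ~  Inv-Gamma(a0 + n/2, b0 + 0.5 * sum((y - mu)^2))
    sigma2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + 0.5 * np.sum((y - mu) ** 2)))
    if it >= 1000:                                 # discard burn-in draws
        draws.append((mu, sigma2))

draws = np.array(draws)
print("posterior mean of mu:", draws[:, 0].mean())
```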

Weighted Latin Hypercube Sampling to Estimate Clearance-to-stop for Probabilistic Design of Seismically Isolated Structures in Nuclear Power Plants

  • Han, Minsoo; Hong, Kee-Jeung; Cho, Sung-Gook
    • Journal of the Earthquake Engineering Society of Korea / v.22 no.2 / pp.63-75 / 2018
  • This paper proposes an extension of Latin Hypercube Sampling (LHS) that removes the need for intervals of equal probability area, allowing intervals with different probability areas to be used. This method is called Weighted Latin Hypercube Sampling (WLHS). The paper describes the equations and the detailed procedure needed to apply a weight function within WLHS. WLHS is verified through numerical examples by comparing the estimated distribution parameters with those from other methods such as Random Sampling and Latin Hypercube Sampling. WLHS provides a more flexible way of selecting samples than LHS, and the accuracy of its distribution-parameter estimates depends on the choice of weight function. The proposed WLHS is applied to seismically isolated structures in nuclear power plants: clearance-to-stops (CSs) calculated using the LHS proposed by Huang et al. [1] and the WLHS proposed in this paper are compared to investigate the effect of the sampling technique. (A minimal stratification sketch follows this entry.)
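
A minimal sketch of the WLHS idea as the abstract describes it: stratify the probability axis into intervals whose areas follow a weight function, draw one sample per interval, and map through the inverse CDF. The weight vector here is an arbitrary assumption; the paper's weighting scheme may differ.

```python
# A minimal WLHS-style sketch: unequal-probability strata from an assumed
# weight vector, one draw per stratum, mapped through an inverse CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def weighted_lhs(weights, ppf):
    """One sample per interval; interval widths proportional to the weights."""
    w = np.asarray(weights, dtype=float)
    edges = np.concatenate([[0.0], np.cumsum(w) / w.sum()])   # unequal strata
    u = rng.uniform(edges[:-1], edges[1:])                    # one draw per stratum
    rng.shuffle(u)                                            # decouple ordering
    return ppf(u)

# Assumed weight function: later intervals get larger probability area,
# emphasizing the upper tail.
weights = np.linspace(1.0, 3.0, 20)
samples = weighted_lhs(weights, stats.norm(loc=0.0, scale=1.0).ppf)
print(samples)
```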

Topic Extraction and Classification Method Based on Comment Sets

  • Tan, Xiaodong
    • Journal of Information Processing Systems / v.16 no.2 / pp.329-342 / 2020
  • In recent years, sentiment-oriented text classification has been one of the essential research topics in natural language processing, widely used in sentiment analysis of review corpora for hotels and other commodities. This paper proposes an improved W-LDA (weighted latent Dirichlet allocation) topic model to address the shortcomings of the traditional LDA topic model. In the Gibbs sampling of topic-word assignments and the calculation of the expected word distributions in the W-LDA model, an average weighted value is adopted to keep topic-related words from being submerged by high-frequency words, improving topic distinctness. A support vector machine classifier is then built on the extracted high-quality document-topic distributions and topic-word vectors. Finally, an efficient integrated method is constructed for extracting emotional words, computing topic distributions, and classifying sentiment. Tests on real teaching-evaluation data and a public comment test set show that the proposed method has clear advantages over two other typical algorithms in topic differentiation, classification precision, and F1-measure. (A minimal weighting sketch follows this entry.)
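
A minimal sketch of the weighting principle described above: damp raw topic-word counts with a per-word weight so very frequent words do not submerge topic-related words. The damping function is an assumption for illustration, not the paper's exact W-LDA sampler.

```python
# A minimal sketch of down-weighting high-frequency words when estimating the
# topic-word distribution; the weight function is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(2)
K, V = 3, 8                                  # topics, vocabulary size (toy)
counts = rng.integers(0, 50, size=(K, V))    # stand-in topic-word counts from Gibbs

beta = 0.01                                  # symmetric Dirichlet smoothing
corpus_freq = counts.sum(axis=0)             # corpus frequency of each word
weight = 1.0 / np.log(2.0 + corpus_freq)     # assumed damping: rarer word -> larger weight

weighted = counts * weight                   # damp high-frequency words
phi = (weighted + beta) / (weighted.sum(axis=1, keepdims=True) + V * beta)
print(phi.round(3))                          # weighted topic-word distributions
```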

Supplier Selection using DEA-AHP Method in Steel Distribution Industry (DEA AHP 모형을 통한 철강유통산업에서의 공급업체 선정)

  • Park, Jinkyu; Kim, Pansoo
    • Journal of the Society of Korea Industrial and Systems Engineering / v.40 no.2 / pp.51-59 / 2017
  • Due to the rapid change in the global business environment, the growth of China's steel industry, and the inflow of cheap products, the domestic steel industry faces a downward trend, and a shift in business paradigm from quantitative growth to qualitative products is needed. In this environment, it is very important for domestic steel distribution companies to secure competitiveness by selecting good suppliers through an efficient procurement strategy and an effective selection method. This study identified the success factors of the steel distribution industry through a survey of experts. Weights for each factor were derived using AHP (analytic hierarchy process) analysis and then applied in a DEA (data envelopment analysis) model to select the best steel supplier. The paper uses 29 domestic steel distribution firms as a case example and suggests a five-step decision process for selecting good vendors. Quality, price, delivery, and finance serve as selection criteria, from which nine variables were derived, including product diversity, base price, discount, payment position, average delivery date, urgent-order responsiveness, and financial condition. These variables were used as output variables of the DEA model, with sales and facilities as input variables. Pairwise comparisons were conducted on these variables, and the weights calculated by AHP pairwise comparison were used in the DEA analysis. Through the DEA efficiency analysis, efficient DMUs (decision making units) were recommended as steel suppliers. The domestic case example demonstrates the effectiveness of the approach. (A minimal AHP sketch follows this entry.)
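
A minimal sketch of the AHP step described above: derive criterion weights (quality, price, delivery, finance) from a pairwise comparison matrix via its principal eigenvector. The comparison values are hypothetical; in the paper such weights then feed the DEA model.

```python
# A minimal AHP weighting sketch; the pairwise judgments are hypothetical.
import numpy as np

criteria = ["quality", "price", "delivery", "finance"]
A = np.array([                          # hypothetical pairwise judgments (Saaty scale)
    [1.0,  2.0,  3.0, 4.0],
    [1/2., 1.0,  2.0, 3.0],
    [1/3., 1/2., 1.0, 2.0],
    [1/4., 1/3., 1/2., 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                     # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                                    # normalized criterion weights

CI = (eigvals.real[k] - len(A)) / (len(A) - 1)  # consistency index
print(dict(zip(criteria, w.round(3))), "CI =", round(CI, 3))
```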

Exploring Stock Market Variables and Weighted Market Price Index: The Case of Jordan

  • ALADWAN, Mohammad; ALMAHARMEH, Mohammad; ALSINGLAWI, Omar
    • The Journal of Asian Finance, Economics and Business / v.8 no.3 / pp.977-985 / 2021
  • The main aim of this study is to provide empirical evidence on the association between stock market exchange data and the weighted price index. The research used monthly data reported by the Amman Stock Exchange (ASE) and the Central Bank of Jordan (CBJ). The weighted price index (WPI) was the dependent variable; the independent variables were the turnover ratio (TOR), number of trading days (NTD), price-earnings ratio (PER), and dividend yield ratio (DY). The study period runs from January 2015 to October 2020. The methodology follows a quantitative approach, using multiple regression to test the hypotheses. The final results provide conclusive evidence that the weighted market price index is strongly and positively correlated with three of the predetermined variables, namely the turnover ratio, price-earnings ratio, and dividend yield, while no evidence was obtained for an effect of the number of trading days. The findings show that the market price index is influenced not only by macro factors but also by variables previously assumed to be of no use in judging price index movements. (A minimal regression sketch follows this entry.)
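
A minimal sketch of the regression setup described above, with synthetic placeholder data rather than ASE/CBJ figures.

```python
# A minimal multiple-regression sketch of WPI on TOR, NTD, PER, DY; all monthly
# figures below are synthetic placeholders, not ASE data.
import numpy as np

rng = np.random.default_rng(3)
n = 70                                           # ~ Jan 2015 .. Oct 2020, monthly
TOR = rng.uniform(1, 10, n)
NTD = rng.integers(18, 23, n).astype(float)
PER = rng.uniform(8, 25, n)
DY  = rng.uniform(1, 6, n)
# Synthetic data-generating process; NTD deliberately has no effect, mirroring
# the abstract's finding of no evidence for the number of trading days.
WPI = 1000 + 30*TOR + 12*PER + 20*DY + rng.normal(0, 40, n)

X = np.column_stack([np.ones(n), TOR, NTD, PER, DY])
coef, *_ = np.linalg.lstsq(X, WPI, rcond=None)   # OLS estimates
print(dict(zip(["const", "TOR", "NTD", "PER", "DY"], coef.round(2))))
```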

Evaluation of Non-Normal Process Capability by Johnson System (존슨 시스템에 의한 비정규 공정능력의 평가)

  • 김진수; 김홍준
    • Journal of the Korea Safety Management & Science / v.3 no.3 / pp.175-190 / 2001
  • We propose a new process capability index, $C_{psk}$(WV), applying the weighted variance control charting method to non-normally distributed processes. The main idea of the weighted variance method (WVM) is to divide a skewed or asymmetric distribution at its mean into two normal distributions that share the same mean but have different standard deviations. Using distributions generated from the Johnson family of distributions as an example, we demonstrate how the weighted-variance-based process capability indices perform in comparison with two other non-normal methods, namely the Clements and Wright methods. The example shows that the weighted-variance-based indices are more consistent than the other two methods in their sensitivity to departures of the process mean/median from the target value for non-normal processes. A second comparison uses the percentage nonconforming computed by the Pearson, Johnson, and Burr systems; it shows little difference between the Pearson and Burr systems, while the Johnson system underestimates process capability relative to the other two. (A minimal weighted-variance sketch follows this entry.)

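A minimal sketch of the weighted-variance split described above, following a common WV formulation (one-sided standard deviations from the probability mass on each side of the mean); the paper's exact $C_{psk}$(WV) may differ, and the sample and specification limits are hypothetical.

```python
# A minimal weighted-variance (WV) capability sketch; spec limits and data are
# hypothetical, and this follows a common WV formulation rather than the
# paper's exact C_psk(WV).
import numpy as np

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.0, size=2000)    # skewed sample (Johnson-like)
LSL, USL = 0.1, 8.0                               # hypothetical specification limits

mu, sigma = x.mean(), x.std(ddof=1)
P = np.mean(x <= mu)                              # weight: probability mass below the mean
sigma_L = sigma * np.sqrt(2.0 * P)                # lower-side standard deviation
sigma_U = sigma * np.sqrt(2.0 * (1.0 - P))        # upper-side standard deviation

Cpu = (USL - mu) / (3.0 * sigma_U)
Cpl = (mu - LSL) / (3.0 * sigma_L)
print("Cpk(WV) =", round(min(Cpu, Cpl), 3))
```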

Geographically Weighted Regression on the Environmental-Ecological Factors of Human Longevity (장수의 환경생태학적 요인에 관한 지리가중회귀분석)

  • Choi, Don Jeong; Suh, Yong Cheol
    • Journal of Korean Society for Geospatial Information Science / v.20 no.3 / pp.57-63 / 2012
  • The ordinary least squares (OLS) regression model assumes that the relationship between the distribution of the longevity population and environmental factors is identical everywhere, so OLS regression cannot sufficiently explain the spatial characteristics of the longevity phenomenon and its related variables. The geographically weighted regression (GWR) model represents the spatial relationships of adjacent areas using a geographical weighting function, and can locally explain the spatial variation in the distribution of the longevity population in terms of environmental characteristics. From this point of view, this study performed a comparative analysis between the OLS and GWR models for the ecological factors of longevity identified in existing studies. In the results, the GWR model fit better than the OLS model and could account for spatial variability in the effects of specific environmental variables. (A minimal GWR sketch follows this entry.)
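
A minimal GWR sketch: at each location, fit a weighted least-squares regression in which nearby observations receive larger weights through a Gaussian kernel. The coordinates, the single environmental variable, and the bandwidth are synthetic assumptions.

```python
# A minimal GWR sketch with a Gaussian distance kernel; all data are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))              # synthetic locations
env = rng.normal(size=n)                              # one environmental factor
longevity = 5 + (1 + 0.3 * coords[:, 0]) * env + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), env])
bandwidth = 2.0                                       # assumed kernel bandwidth

def gwr_coefs_at(i):
    """Local WLS coefficients at location i, weighted by spatial proximity."""
    d = np.linalg.norm(coords - coords[i], axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian spatial weights
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ longevity)

local = np.array([gwr_coefs_at(i) for i in range(n)])
print("local slope range:", local[:, 1].min().round(2), "..", local[:, 1].max().round(2))
```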

Extracting Minimized Feature Input And Fuzzy Rules Using A Fuzzy Neural Network And Non-Overlap Area Distribution Measurement Method (퍼지신경망과 비중복면적 분산 측정법을 이용한 최소의 특징입력 및 퍼지규칙의 추출)

  • Lim, Joon-Shik
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.5 / pp.599-604 / 2005
  • This paper presents fuzzy rules for predicting the diagnosis of Wisconsin breast cancer with a minimized number of feature inputs, using a neural network with weighted fuzzy membership functions (NEWFM) and the non-overlap area distribution measurement method. NEWFM self-adapts weighted membership functions from the given Wisconsin breast cancer clinical training data. For each of the n feature inputs, a set of small, medium, and large weighted triangular membership functions in a hyperbox is used. The membership functions are initially distributed and weighted at random, and their positions and weights are adjusted during learning. After learning, prediction rules are extracted directly from the n enhanced bounded sums of the small, medium, and large weighted fuzzy membership functions. The non-overlap area distribution measurement method is then applied to select important features by deleting less important ones. Two sets of prediction rules extracted from NEWFM using the selected 4 of the 9 input features outperform currently published results in the number of rules, the number of input features, and accuracy, at 99.71%. (A minimal membership-function sketch follows this entry.)
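
A minimal sketch of the membership-function machinery described above: a weighted triangular membership function and the bounded sum of the small/medium/large memberships for one normalized feature. The triangle parameters and weights are illustrative assumptions, not NEWFM's learned values.

```python
# A minimal sketch of weighted triangular membership functions and their
# bounded sum; parameters and weights are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: rises on [a, b], falls on [b, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Three weighted triangles covering a normalized feature range [0, 1].
params = [(0.0, 0.15, 0.5), (0.2, 0.5, 0.8), (0.5, 0.85, 1.0)]  # small/medium/large
weights = np.array([0.6, 0.9, 0.4])                             # assumed learned weights

def weighted_bounded_sum(x):
    """Bounded sum of the weighted small/medium/large memberships at x."""
    mu = np.array([w * tri(x, a, b, c) for w, (a, b, c) in zip(weights, params)])
    return min(mu.sum(), 1.0)                 # bounded sum, capped at 1

print(weighted_bounded_sum(0.42))
```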