• Title/Summary/Keyword: measure-theoretic entropy

9 search results

Application of Information-theoretic Measure (Entropy) to Safety Assessment in Manufacturing Processes

  • Choi, Gi-Heung
    • International Journal of Safety / v.4 no.1 / pp.8-13 / 2005
  • The design of a manufacturing process, in general, can create new processes that may potentially harm workers. Designing a safety-guaranteed manufacturing process is therefore very important, since it determines the ultimate outcome of manufacturing activities with respect to worker safety. This study discusses the application of an information-theoretic measure (entropy) to the safety assessment of manufacturing processes. The idea is based on general design principles and their applications. Some examples are given.
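
The abstract does not spell out the entropy formulation being applied; as a hedged illustration, the sketch below computes the standard Shannon entropy over a hypothetical distribution of process states (the state labels and probabilities are invented for illustration only).

```python
import math

def shannon_entropy(probs, base=2.0):
    """Shannon entropy H = -sum(p * log p) of a discrete distribution."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Hypothetical probabilities of a process being in each hazard-relevant state.
process_states = {"normal": 0.85, "degraded": 0.10, "hazardous": 0.05}
H = shannon_entropy(process_states.values())
print(f"Process-state entropy: {H:.3f} bits")  # higher entropy = more uncertainty
```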

ENTROPY OF NONAUTONOMOUS DYNAMICAL SYSTEMS

  • Zhu, Yujun;Liu, Zhaofeng;Xu, Xueli;Zhang, Wenda
    • Journal of the Korean Mathematical Society / v.49 no.1 / pp.165-185 / 2012
  • In this paper, the topological entropy and measure-theoretic entropy of nonautonomous dynamical systems are studied. Some properties of these entropies are given and the relation between them is discussed. Moreover, bounds on these entropies are obtained for several particular nonautonomous systems, such as affine transformations on metrizable groups (especially on the torus) and smooth maps on Riemannian manifolds.
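
For reference, the classical (autonomous) measure-theoretic entropy that the nonautonomous notion generalizes is the Kolmogorov-Sinai entropy: for a measure-preserving map $T$ on a probability space $(X, \mathcal{B}, \mu)$ and a finite partition $\xi$,

$$h_\mu(T, \xi) = \lim_{n \to \infty} \frac{1}{n} H_\mu\Big(\bigvee_{i=0}^{n-1} T^{-i}\xi\Big), \qquad h_\mu(T) = \sup_{\xi} h_\mu(T, \xi),$$

where $H_\mu(\eta) = -\sum_{A \in \eta} \mu(A) \log \mu(A)$. In the nonautonomous setting the single map $T$ is replaced by a sequence of maps; the abstract does not reproduce the exact definition used, so this is only the standard baseline.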

On Information Theoretic Index for Measuring the Stochastic Dependence Among Sets of Variates

  • Kim, Hea-Jung
    • Journal of the Korean Statistical Society / v.26 no.1 / pp.131-146 / 1997
  • In this paper the problem of measuring the stochastic dependence among sets of random variates is considered, and attention is specifically directed to forming a single well-defined measure of the dependence among sets of normal variates. A new information-theoretic measure of the dependence, called the dependence index (DI), is introduced and several of its properties are studied. The development of DI is based on the generalization and normalization of the mutual information introduced by Kullback (1968). For data analysis, a minimum cross-entropy estimator of DI is suggested, and its asymptotic distribution is obtained for testing the existence of the dependence. Monte Carlo simulations demonstrate the performance of the estimator, and show that it is useful not only for evaluating the dependence but also for testing independence models.
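
The dependence index itself is specific to the paper, but its building block, the mutual information between sets of jointly normal variates, has a closed form in terms of covariance determinants. Below is a minimal numpy sketch of that quantity, with an arbitrary example covariance (not data from the paper).

```python
import numpy as np

def gaussian_mutual_information(cov, p):
    """Mutual information I(X;Y), in nats, between the first p variates and the
    remaining variates of a jointly normal vector with covariance matrix `cov`."""
    cov = np.asarray(cov, dtype=float)
    _, logdet_xx = np.linalg.slogdet(cov[:p, :p])
    _, logdet_yy = np.linalg.slogdet(cov[p:, p:])
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (logdet_xx + logdet_yy - logdet)

# Example: two single-variate sets with correlation 0.8.
cov = [[1.0, 0.8], [0.8, 1.0]]
print(gaussian_mutual_information(cov, p=1))  # 0.5 * ln(1 / (1 - 0.8**2)) ≈ 0.511
```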

Application of Discrimination Information (Cross Entropy) as Information-theoretic Measure to Safety Assessment in Manufacturing Processes

  • Choi, Gi-Heung;Ryu, Boo-Hyung
    • International Journal of Safety / v.4 no.2 / pp.1-5 / 2005
  • The design of a manufacturing process, in general, can create new processes that may potentially harm workers. Designing a safety-guaranteed manufacturing process is therefore very important, since it determines the ultimate outcome of manufacturing activities with respect to worker safety. This study discusses the application of discrimination information (cross entropy) to the safety assessment of manufacturing processes. The idea is based on general design principles and their applications. An example of Cartesian robotic movement is given.
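
Discrimination information is Kullback's term for what is now usually called the Kullback-Leibler divergence (the title uses "cross entropy" for the same quantity). For a discrete process distribution $P = (p_1, \dots, p_n)$ and a reference distribution $Q = (q_1, \dots, q_n)$ it is

$$D(P \,\|\, Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i},$$

which is zero when $P$ matches the reference $Q$ and grows as the two diverge; how the paper maps manufacturing-process states to $P$ and $Q$ is not specified in the abstract.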

Minimum Variance Unbiased Estimation for the Maximum Entropy of the Transformed Inverse Gaussian Random Variable by $Y=X^{-1/2}$

  • Choi, Byung-Jin
    • Communications for Statistical Applications and Methods / v.13 no.3 / pp.657-667 / 2006
  • The concept of entropy, introduced into communication theory by Shannon (1948) as a measure of uncertainty, is of prime interest in information-theoretic statistics. This paper considers minimum variance unbiased estimation of the maximum entropy of the inverse Gaussian random variable transformed by $Y=X^{-1/2}$. The properties of the derived UMVU estimator are investigated.
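
For context, the quantity being estimated is built on the differential entropy of the transformed variable,

$$H(Y) = -\int_0^\infty f_Y(y) \log f_Y(y)\, dy, \qquad Y = X^{-1/2},$$

where $f_Y$ is the density of $Y$; the maximum-entropy characterization and the UMVU estimator under the inverse Gaussian model are developed in the paper itself and are not reproduced in the abstract.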

Information Theoretic Standardized Logistic Regression Coefficients with Various Coefficients of Determination

  • Hong Chong-Sun;Ryu Hyeon-Sang
    • Communications for Statistical Applications and Methods / v.13 no.1 / pp.49-60 / 2006
  • There are six approaches to constructing standardized coefficients for logistic regression. The standardized coefficient based on Kruskal's information theory is known to be the best from a conceptual standpoint. To calculate this standardized coefficient, the coefficient of determination based on entropy loss is used, among the many coefficients of determination available for logistic regression. In this paper, this standardized coefficient is obtained using four coefficients of determination that have the most intuitively reasonable interpretation as proportional-reduction-in-error measures for logistic regression. These four variants of the sixth standardized coefficient are then compared with the other standardized coefficients.
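
The abstract does not state which entropy-loss coefficient of determination is used; a widely used example of this kind of proportional-reduction-in-error measure is the McFadden-type pseudo $R^2$, computed from the fitted and null log-likelihoods. The sketch below illustrates only that general idea on simulated data, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -0.5, 0.2]) + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
ll_model = -log_loss(y, model.predict_proba(X)[:, 1], normalize=False)
ll_null = -log_loss(y, np.full(len(y), y.mean()), normalize=False)

# Entropy-based (McFadden-type) coefficient of determination.
r2_entropy = 1.0 - ll_model / ll_null
print(f"entropy-based R^2: {r2_entropy:.3f}")
```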

The Study on Information-Theoretic Measures of Incomplete Information based on Rough Sets (러프 집합에 기반한 불완전 정보의 정보 이론적 척도에 관한 연구)

  • 김국보;정구범;박경옥
    • Journal of Korea Multimedia Society / v.3 no.5 / pp.550-556 / 2000
  • This paper derives optimal decision rules from incomplete information using the concepts of the indiscernibility relation and approximation space in rough set theory. Because errors may arise when the processed information contains multiple or missing data, a method for removing or minimizing such data is required. Entropy, used in the information-processing field to measure uncertainty or information content, is utilized to remove incomplete information from a rough relational database. An information system, however, cannot be assumed to be free of incomplete information. This paper therefore proposes object relation entropy and attribute relation entropy, based on rough sets, as information-theoretic measures for removing the incomplete information that may appear in the condition and decision attributes of an information system.
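
The object relation entropy and attribute relation entropy are defined in the paper itself; as a hedged baseline, the sketch below computes the ordinary entropy of the partition that an attribute subset induces on a toy information table through the indiscernibility relation (the table and attribute names are invented for illustration).

```python
import math
from collections import defaultdict

def indiscernibility_classes(table, attributes):
    """Group objects that are indiscernible on the given attributes."""
    classes = defaultdict(list)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attributes)].append(obj)
    return list(classes.values())

def partition_entropy(table, attributes):
    """Entropy of the partition induced by the indiscernibility relation."""
    n = len(table)
    return -sum((len(c) / n) * math.log2(len(c) / n)
                for c in indiscernibility_classes(table, attributes))

# Toy information table: object -> {condition attribute: value}.
table = {
    "x1": {"temp": "high", "load": "low"},
    "x2": {"temp": "high", "load": "high"},
    "x3": {"temp": "low",  "load": "high"},
    "x4": {"temp": "low",  "load": "high"},
}
print(partition_entropy(table, ["temp"]))          # 1.0 bit
print(partition_entropy(table, ["temp", "load"]))  # 1.5 bits
```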

A Theoretic Approach to the Organic Food Market in Korea: An Estimation of Information Entropy as a Measure of Information Asymmetry for Credence Goods (우리나라 친환경농산물 시장에 대한 정보이론적 접근 : 신뢰재의 정보비대칭성 지표로서의 정보엔트로피 측정)

  • Song, Yang-Hoon
    • Journal of Environmental Policy / v.7 no.3 / pp.41-61 / 2008
  • Although the size of the organic food market in Korea has increased significantly, its further development is hampered by the information asymmetry between the producers and consumers of organic food. Resolving this is not just a matter of revitalizing the market; it also bears on Korean farmers' survival in an era of trade liberalization. To produce more value-added products, the information asymmetry issue has to be resolved for the organic food market and for other agricultural credence goods such as Han-woo (Korean beef). Measuring information asymmetry has therefore become a central issue. One way to measure asymmetry is to use game theory. In practice, however, estimating payoffs at the industry level is hard to accomplish, and even when it is possible, the reliability of the estimated payoffs is not guaranteed. As an alternative, this study uses the concept of information entropy (the level of disorder in information), developed by Shannon (1948), and proposes that this measure be used to assess the level of information asymmetry in the Korean organic food market. Using recent data, it was found that information entropy in the Korean organic food market has been decreasing steadily since 2003. It is therefore proposed that the government adopt measures to improve the certification system for organic food.

The Generation of Control Rules for Data Mining (데이터 마이닝을 위한 제어규칙의 생성)

  • Park, In-Kyoo
    • Journal of Digital Convergence / v.11 no.11 / pp.343-349 / 2013
  • Using the concepts of the equivalence relation and approximation space, rough set theory derives optimal rules by effectively selecting features from the large amount of redundant information encountered in data mining. Attribute reduction is one of the most important parts of its applications. This paper defines an information-theoretic measure, based on rough entropy, for determining the most important attribute within an association of attributes. The proposed method generates an effective reduct set and formulates the core of the attribute set by eliminating redundant attributes. Control rules are then generated from a subset of features that retains the accuracy of the original features after the reduction.
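
The paper's rough-entropy measure is not reproduced in the abstract; the sketch below illustrates the general idea with a standard greedy heuristic that repeatedly adds the condition attribute that most reduces the conditional entropy of the decision attribute (the rows and attribute names are invented for illustration).

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(rows, condition_attrs, decision_attr):
    """H(decision | condition attributes) over a list of attribute->value dicts."""
    groups = defaultdict(list)
    for row in rows:
        groups[tuple(row[a] for a in condition_attrs)].append(row[decision_attr])
    n = len(rows)
    h = 0.0
    for decisions in groups.values():
        counts = Counter(decisions)
        h -= (len(decisions) / n) * sum(
            (c / len(decisions)) * math.log2(c / len(decisions))
            for c in counts.values())
    return h

def greedy_reduct(rows, candidate_attrs, decision_attr):
    """Greedily add attributes until the decision attribute is fully determined."""
    selected, remaining = [], list(candidate_attrs)
    while remaining and (not selected or
                         conditional_entropy(rows, selected, decision_attr) > 0):
        best = min(remaining, key=lambda a: conditional_entropy(
            rows, selected + [a], decision_attr))
        selected.append(best)
        remaining.remove(best)
    return selected

rows = [
    {"temp": "high", "load": "low",  "speed": "slow", "decision": "safe"},
    {"temp": "high", "load": "high", "speed": "slow", "decision": "unsafe"},
    {"temp": "low",  "load": "high", "speed": "fast", "decision": "safe"},
    {"temp": "low",  "load": "low",  "speed": "fast", "decision": "safe"},
]
print(greedy_reduct(rows, ["temp", "load", "speed"], "decision"))  # ['temp', 'load']
```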