• Title/Abstract/Keyword: random graphs

Search results: 50

Measurement of graphs similarity using graph centralities

  • Cho, Tae-Soo;Han, Chi-Geun;Lee, Sang-Hoon
    • 한국컴퓨터정보학회논문지 / Vol. 23 No. 12 / pp.57-64 / 2018
  • In this paper, a method to measure the similarity between two graphs is proposed, based on the centralities of the graphs. The similarity between two graphs $G_1$ and $G_2$ is defined by the difference of distance($G_1$, $G_{R_1}$) and distance($G_2$, $G_{R_2}$), where $G_{R_1}$ and $G_{R_2}$ are sets of random graphs that have the same number of nodes and edges as $G_1$ and $G_2$, respectively. Each distance($G_*$, $G_{R_*}$) is obtained by comparing the centralities of $G_*$ and $G_{R_*}$. Through computational experiments, we show that it is possible to compare graphs regardless of their numbers of vertices or edges. It is also possible to identify and classify the properties of the graphs by measuring and comparing similarities between two graphs.
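
For concreteness, here is a minimal sketch of a centrality-based comparison in the spirit of this abstract, assuming networkx and a simple profile of degree, closeness, and betweenness centralities; the particular distance (mean absolute difference of sorted centrality values, averaged over a few G(n, m) reference graphs) is an illustrative assumption, not the authors' exact formulation.

```python
# Illustrative sketch (not the authors' exact method): compare a graph with random
# reference graphs having the same numbers of nodes and edges via centrality profiles.
import networkx as nx
import numpy as np

def centrality_profile(G):
    """Sorted degree, closeness, and betweenness centrality values, concatenated."""
    cents = [nx.degree_centrality(G),
             nx.closeness_centrality(G),
             nx.betweenness_centrality(G)]
    return np.concatenate([np.sort(list(c.values())) for c in cents])

def distance_to_random(G, num_random=10, seed=0):
    """Average centrality-profile distance from G to random graphs G_R with the
    same numbers of nodes and edges."""
    rng = np.random.default_rng(seed)
    n, m = G.number_of_nodes(), G.number_of_edges()
    base = centrality_profile(G)
    dists = [np.mean(np.abs(base - centrality_profile(
                 nx.gnm_random_graph(n, m, seed=int(rng.integers(10**6))))))
             for _ in range(num_random)]
    return float(np.mean(dists))

def graph_similarity(G1, G2):
    """Similarity of G1 and G2 as the difference of their distances to their own
    random-graph ensembles, following the abstract's definition."""
    return abs(distance_to_random(G1) - distance_to_random(G2))
```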

HAMILTONIANS IN STEINHAUS GRAPHS

  • Lim, Dae-Keun;Kim, Hye-Kyung
    • 대한수학회논문집 / Vol. 11 No. 4 / pp.1137-1145 / 1996
  • A Steinhaus graph is a labelled graph whose adjacency matrix $A = (a_{i,j})$ has the Steinhaus property: $a_{i,j} + a_{i,j+1} \equiv a_{i+1,j+1} \pmod 2$. We consider random Steinhaus graphs with $n$ labelled vertices in which edges are chosen independently and with probability $\frac{1}{2}$. We prove that almost all Steinhaus graphs are Hamiltonian, as in random graph theory.
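
A rough illustration of the construction behind random Steinhaus graphs, assuming the usual convention that the first row of the upper triangle is chosen uniformly at random and the remaining entries follow from the Steinhaus property; the helper below is hypothetical, not taken from the paper.

```python
# Hypothetical sketch: build a random Steinhaus graph on n labelled vertices.
# The first row of the upper triangle is uniformly random; every later entry
# satisfies the Steinhaus property a[i+1][j+1] = (a[i][j] + a[i][j+1]) mod 2.
import random
import networkx as nx

def random_steinhaus_graph(n, seed=None):
    rng = random.Random(seed)
    a = [[0] * (n + 1) for _ in range(n + 1)]      # 1-indexed upper triangle
    for j in range(2, n + 1):                      # random first row a[1][2..n]
        a[1][j] = rng.randint(0, 1)
    for i in range(1, n):                          # fill the remaining rows
        for j in range(i + 1, n):
            a[i + 1][j + 1] = (a[i][j] + a[i][j + 1]) % 2
    G = nx.Graph()
    G.add_nodes_from(range(1, n + 1))
    G.add_edges_from((i, j)
                     for i in range(1, n)
                     for j in range(i + 1, n + 1) if a[i][j])
    return G
```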

Generation of Finite Inductive, Pseudo Random, Binary Sequences

  • Fisher, Paul;Aljohani, Nawaf;Baek, Jinsuk
    • Journal of Information Processing Systems / Vol. 13 No. 6 / pp.1554-1574 / 2017
  • This paper introduces a new type of determining factor for Pseudo Random Strings (PRS). This classification depends upon a mathematical property called Finite Induction (FI). FI is similar to a Markov Model in that it presents a model of the sequence under consideration and determines the generating rules for this sequence. If these rules obey certain criteria, then we call the sequence generating these rules an FI PRS. We also consider the relationship of these kinds of PRS's to Good/de Bruijn graphs and Linear Feedback Shift Registers (LFSR). We show that binary sequences from these special graphs have the FI property. We also show how such FI PRS's can be generated without consideration of the Hamiltonian cycles of the Good/de Bruijn graphs. The FI PRS's also have maximum Shannon entropy, while sequences from LFSR's do not, nor are such sequences FI random.
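
As a small companion to the abstract's mention of LFSR's, the sketch below generates a maximal-length LFSR output (an m-sequence) for an assumed primitive polynomial x^4 + x^3 + 1; it only illustrates the kind of sequence being contrasted with FI PRS's and is not the paper's generator.

```python
# Sketch of an LFSR m-sequence (assumed taps for the primitive polynomial
# x^4 + x^3 + 1).  The output has period 2**n - 1 and, viewed cyclically,
# contains every nonzero n-bit window exactly once; inserting one extra 0 into
# its longest run of zeros would give a binary de Bruijn sequence (not done here).
def lfsr_sequence(taps=(4, 3), n=4, state=None):
    state = list(state or [1] * n)         # any nonzero initial fill
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])              # output the last stage
        fb = 0
        for t in taps:                     # feedback = XOR of tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]          # shift the register
    return out

print("".join(map(str, lfsr_sequence())))  # one full period of length 15
```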

The Classification of random graph models using graph centralities

  • Cho, Tae-Soo;Han, Chi-Geun;Lee, Sang-Hoon
    • 한국컴퓨터정보학회논문지 / Vol. 24 No. 7 / pp.61-69 / 2019
  • In this paper, a classification method for random graph models is proposed, based on the centralities of the random graphs. The similarity between two random graphs is measured for the classification of random graph models. The similarity between two random graph models $G^{R_1}$ and $G^{R_2}$ is defined by the distance between $G^{R_1}$ and $G^{R_2}$, where $G^{R_2}$ is a set of random graphs $G^{R_2}=\{G_1^{R_2},...,G_p^{R_2}\}$ that have the same number of nodes and edges as the random graph $G^{R_1}$. The distance($G^{R_1},G^{R_2}$) is obtained by comparing the centralities of $G^{R_1}$ and $G^{R_2}$. Through computational experiments, we show that it is possible to compare random graph models regardless of the number of vertices or edges of the random graphs. It is also possible to identify and classify the properties of the random graph models by measuring and comparing similarities between random graph models.
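
In the same spirit as the sketch after the first abstract above, the following hypothetical example compares two random graph models (Erdős–Rényi G(n, m) versus Barabási–Albert with a matched edge count) by averaging a degree-centrality-profile distance over samples from each model; the choice of models and of the distance are assumptions for illustration, not the authors' procedure.

```python
# Hypothetical comparison of two random graph models with matched node/edge counts
# (Erdos-Renyi G(n, m) vs. Barabasi-Albert): average the distance between their
# sorted degree-centrality profiles over several sampled graph pairs.
import networkx as nx
import numpy as np

def degree_profile(G):
    return np.sort(list(nx.degree_centrality(G).values()))

def model_distance(sample_model_1, sample_model_2, samples=10):
    dists = []
    for s in range(samples):
        G1, G2 = sample_model_1(s), sample_model_2(s)
        dists.append(np.mean(np.abs(degree_profile(G1) - degree_profile(G2))))
    return float(np.mean(dists))

n = 200
ba = lambda s: nx.barabasi_albert_graph(n, 5, seed=s)   # (n - 5) * 5 = 975 edges
er = lambda s: nx.gnm_random_graph(n, 975, seed=s)      # matched edge count
print(model_distance(er, ba))
```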

Definition of hierarchical attributed random graph and proposal of its applications (계층적 속성 랜덤 그래프의 정의 및 이를 이용한 여러 응용들의 소개)

  • 성동수
    • 전자공학회논문지C / Vol. 34C No. 8 / pp.79-87 / 1997
  • For the representation of a complex object, the object is decomposed into several parts, and it is described by these decomposed parts and their relations. In general, the parts can be primitive elements that cannot be decomposed further, or they can be decomposed into their own subparts. Therefore, the hierarchical description method is very natural, and it is represented by a hierarchical attributed graph whose vertices represent either primitive elements or graphs; these graphs in turn have vertices which contain primitive elements or graphs. When some uncertainty exists in the hierarchical description of a complex object, either due to noise or minor deformation, a probabilistic description of the object ensemble is necessary. For this purpose, in this paper, we formally define the hierarchical attributed random graph, which is an extension of the hierarchical random graph, and derive the equations for the entropy calculation of the hierarchical attributed random graph. Finally, we propose application areas in which these concepts can be used.
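
As a self-contained illustration only (not the paper's derivation), the snippet below computes the Shannon entropy of the discrete distributions that describe an attributed random graph's vertex attributes and edges; the probabilities are made-up example values.

```python
# Toy illustration (not the paper's equations): Shannon entropy of the discrete
# distributions describing an attributed random graph, where each vertex attribute
# and each possible edge is modelled as a random variable.
import math

def shannon_entropy(probs):
    """H(p) = -sum p_i * log2(p_i) over the nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical example values: a vertex attribute that is 'A' with probability 0.7
# or 'B' with probability 0.3, and an edge present with probability 0.5.
print(shannon_entropy([0.7, 0.3]))   # ~0.881 bits of attribute uncertainty
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit of edge uncertainty
```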

Visualizing Multi-Variable Prediction Functions by Segmented k-CPG's

  • Huh, Myung-Hoe
    • Communications for Statistical Applications and Methods / Vol. 16 No. 1 / pp.185-193 / 2009
  • Machine learning methods such as support vector machines and random forests yield nonparametric prediction functions of the form $y = f(x_1,{\ldots},x_p)$. As a sequel to the previous article (Huh and Lee, 2008) on visualizing nonparametric functions, I propose more sensible graphs for visualizing $y = f(x_1,{\ldots},x_p)$ herein, which have two clear advantages over the previous simple graphs. The new graphs show a small number of prototype curves of $f(x_1,{\ldots},x_{j-1},x_j,x_{j+1},{\ldots},x_p)$, revealing the statistically plausible portion over the interval of $x_j$, which changes with ($x_1,{\ldots},x_{j-1},x_{j+1},{\ldots},x_p$). To complement the visual display, matching importance measures for each of the p predictor variables are produced. The proposed graphs and importance measures are validated in simulated settings and demonstrated for an environmental study.
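
A hedged sketch of the general idea, assuming scikit-learn's RandomForestRegressor and using k-means clustering of individual conditional curves as a stand-in for the article's segmented k-CPG construction; the data, the model, and the clustering step are assumptions for illustration only.

```python
# Hedged sketch: individual conditional curves of a fitted random forest over x_j,
# clustered with k-means into k prototype curves (a stand-in for segmented k-CPG's).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))                  # made-up data
y = X[:, 0] ** 2 + X[:, 1] * (X[:, 2] > 0) + rng.normal(0, 0.1, 500)
f = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

j, k = 1, 3                                            # curves over x_j, k prototypes
grid = np.linspace(-1, 1, 25)
curves = np.empty((len(X), len(grid)))
for i, row in enumerate(X):                            # one conditional curve per row
    Xg = np.tile(row, (len(grid), 1))
    Xg[:, j] = grid
    curves[i] = f.predict(Xg)

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(curves)
for c in range(k):                                     # plot cluster-mean prototypes
    plt.plot(grid, curves[labels == c].mean(axis=0), label=f"prototype {c + 1}")
plt.xlabel(f"x_{j + 1}")
plt.ylabel("f(x)")
plt.legend()
plt.show()
```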

Simple Graphs for Complex Prediction Functions

  • Huh, Myung-Hoe;Lee, Yong-Goo
    • Communications for Statistical Applications and Methods / Vol. 15 No. 3 / pp.343-351 / 2008
  • By supervised learning with p predictors, we frequently obtain a prediction function of the form $y\;=\;f(x_1,...,x_p)$. When $p\;{\geq}\;3$, it is not easy to understand the inner structure of f, except in the case where the function is formulated as additive. In this study, we propose to use p simple graphs for visual understanding of complex prediction functions produced by several supervised learning engines such as LOESS, neural networks, support vector machines and random forests.
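
A related illustration, with the caveat that partial-dependence plots (one curve per predictor, averaging the fitted function over the remaining predictors) are used here merely as a stand-in for the article's p simple graphs; the data and model are invented for the example.

```python
# Related illustration: one partial-dependence curve per predictor as a stand-in
# for the article's p simple graphs (data and model invented for the example).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 400)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# One simple graph per predictor: the fitted f averaged over the other predictors.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1, 2])
plt.show()
```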

A NOTE ON CONNECTEDNESS OF QUASI-RANDOM GRAPHS

  • Lee, Chang-Woo
    • 대한수학회논문집 / Vol. 14 No. 2 / pp.295-299 / 1999
  • Every quasi-random graph $G(n)$ on $n$ vertices consists of a giant component plus $o(n)$ vertices, and every quasi-random graph $G(n)$ with minimum degree $(1+o(1))n/2$ is connected.

HAMILTONICITY OF QUASI-RANDOM GRAPHS

  • Lee, Tae Keug;Lee, Changwoo
    • Korean Journal of Mathematics / Vol. 10 No. 1 / pp.29-35 / 2002
  • It is well known that a random graph $G_{1/2}(n)$ is Hamiltonian almost surely. In this paper, we show that every quasirandom graph $G(n)$ with minimum degree $(1+o(1))n/2$ is also Hamiltonian.

DIAMETERS AND CLIQUE NUMBERS OF QUASI-RANDOM GRAPHS

  • Lee, Tae Keug;Lee, Changwoo
    • Korean Journal of Mathematics / Vol. 11 No. 1 / pp.65-70 / 2003
  • We show that every quasi-random graph $G(n)$ with $n$ vertices and minimum degree $(1+o(1))n/2$ has diameter either 2 or 3 and that every quasi-random graph $G(n)$ with n vertices has a clique number of $o(n)$ with wide spread.
