Exact Decoding Probability of Random Linear Network Coding for Tree Networks

  • Li, Fang (State Key Laboratory of Integrated Services Networks (ISN), Xidian University) ;
  • Xie, Min (State Key Laboratory of Integrated Services Networks (ISN), Xidian University)
  • Received : 2014.09.15
  • Accepted : 2015.01.02
  • Published : 2015.02.28

Abstract

The hierarchical structure of networks is widely applied in many practical scenarios, especially in emergency cases. In this paper, we focus on a tree network, with and without packet loss, where one source sends data to n destinations through m relay nodes employing random linear network coding (RLNC) over a Galois field in a parallel transmission system. We derive closed-form probability expressions for successful decoding at a single destination node and at all destination nodes in this multicast scenario. For ease of computation, we also propose an upper bound on the failure probability. We then investigate the impact of the major parameters, i.e., the size of the finite field, the number of internal nodes, the number of sink nodes and the channel failure probability, on the decoding performance with simulation results. In addition, numerical results show that, for a fixed exact decoding probability, the required field size can be minimized. When the failure decoding probability is given, the coding operation remains simple and its complexity low even in a small finite field.


1. Introduction

Network coding (NC) has been shown to offer advantages in throughput, robustness, power consumption, and security in both wireline [1] and wireless networks [2], by allowing intermediate nodes to mix information before forwarding it. The work in [1] is regarded as the starting point of NC research. Li et al. [3] showed that linear network coding (LNC) can achieve the optimal throughput for a multicast transmission.

Later, Ho et al. [4] proposed random linear network coding (RLNC) as an effective, yet simple, approach. They also analyzed its performance by deriving upper bounds on the failure probability of this code. Furthermore, Balli et al. [5] defined two types of failure probabilities to characterize the performance of RLNC and improved the bounds in [4]. Their bounds are expressed in terms of the number of internal nodes, instead of the number of channels or similar quantities.

Typically, the exact decoding probability under RLNC is derived for a fixed topology [6]-[8]. Trullols-Cruces et al. [6] computed the exact probability that a receiver obtains N linearly independent packets among K ≥ N received packets, when the senders use RLNC over a Galois field. This problem is equivalent to computing the probability that an N × K matrix, each element of which is selected from a Galois field uniformly at random, has rank N. Deriving the probability that this matrix has full rank, as in [7], can be viewed as a special case of Th. 1 in our work.
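To make the matrix-rank formulation concrete, the following Python sketch (our illustration, not code from [6] or [7]; it assumes a prime field size q so that Fq arithmetic reduces to modular arithmetic, and the function names are ours) estimates by Monte Carlo the probability that an N × K matrix with i.i.d. uniform entries over Fq has rank N, and compares it with the classical closed-form product ∏_{i=0}^{N-1}(1 - q^{i-K}):

```python
import random

def rank_gf(mat, q):
    """Rank of a matrix over the prime field GF(q), by Gaussian elimination."""
    m = [row[:] for row in mat]
    rows = len(m)
    rank = 0
    for c in range(len(m[0])):
        # find a pivot in column c at or below row `rank`
        piv = next((r for r in range(rank, rows) if m[r][c] % q), None)
        if piv is None:
            continue
        m[rank], m[piv] = m[piv], m[rank]
        inv = pow(m[rank][c], q - 2, q)            # inverse via Fermat, q prime
        m[rank] = [x * inv % q for x in m[rank]]
        for r in range(rows):
            if r != rank and m[r][c] % q:
                f = m[r][c]
                m[r] = [(a - f * b) % q for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

def full_rank_prob(N, K, q):
    """Closed form: Pr(an N x K matrix of i.i.d. uniform GF(q) entries has rank N)."""
    p = 1.0
    for i in range(N):
        p *= 1.0 - q ** (i - K)
    return p

random.seed(1)
N, K, q, trials = 5, 7, 3, 20000
hits = sum(
    rank_gf([[random.randrange(q) for _ in range(K)] for _ in range(N)], q) == N
    for _ in range(trials)
)
print(hits / trials, full_rank_prob(N, K, q))
```

With N = 5, K = 7 and q = 3 the two values agree to within sampling error.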

In this paper, we consider a source sending information to all sinks at the theoretical maximum rate with RLNC for tree networks. From the network topological perspective, hierarchical structures can be reduced to tree-type networks. In addition, tree-structured networks scale to wider coverage by increasing their depth, and their flexibility and low delay make them suitable for a variety of emergency applications. Such a topology is widely used and is a basic building block of more complex multi-hop networks [9]-[13]. Because the topology of tree networks with packet loss is dynamic, RLNC allows higher data rates than routing while keeping the operation simple.

The main contributions of this paper are as follows. Firstly, we derive the exact decoding probability of RLNC over a Galois field for tree networks with and without packet loss. Secondly, for ease of computation, we propose upper bounds on the failure probability and improve the bound in [5] (Theorem 2) by replacing the number

by the number m+1 for tree networks, thereby decreasing the complexity. Here m is the number of internal nodes, δ is equal to k - ω, k is the min-cut from s to all sinks t∈T, and ω is the number of symbols generated at the source. Finally, we illustrate the impact of the major parameters, i.e., the size of the finite field, the number of internal nodes, the number of sink nodes and the channel failure probability, on the decoding performance with simulation results. Numerical results also show that, for a fixed decoding probability, the required field size can be minimized.

The remainder of this paper is organized as follows. Section II introduces the definition of RLNC and presents the network model of tree networks. Section III derives closed-form probability expressions for the defined model, proves the main results and describes numerical results. Section IV provides concluding remarks.

 

2. Basic Definitions and the Network Model

We consider a multi-hop relay tree network composed of one source, m internal nodes and n sink nodes, i.e., T = {t1,⋯,tn}, in a parallel transmission system as shown in Fig. 1. There are k parallel channels between two nodes, so k is the min-cut from s to every sink t∈T. Every channel can transmit one field symbol per unit time. Further, denote the channel leading from node i to node j by the directed edge e = (i, j), where node i is called the tail of e and node j is called the head of e. For each node i, let Out(i) be the set of edges leaving node i and In(i) the set of edges entering node i.

Fig. 1. Network model.

The source message x = (x1,...,xω), arranged as a row vector over a Galois field Fq of size q, is transmitted to the source s through ω imaginary channels in In(s). Note that all arithmetic operations are performed over this field. Each of the m relays encodes its received packets using RLNC and sends the resulting packet toward its child nodes. Even if nodes in Fig. 1 join or leave, preexisting nodes need not change their coding operations to preserve the same guarantee of correctness. Ue denotes the message transmitted over the channel e. The coding process at each node i is represented as follows:

where kd,e is called the local encoding kernel or coefficient. The coefficients used to encode the data packets are drawn from the field Fq independently and with equal probability.
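As an illustration of this encoding step (a sketch of ours with a hypothetical helper name, again assuming a prime q so that field operations are plain modular arithmetic), each outgoing packet at a relay is a random Fq-linear combination of the packets received on its incoming channels:

```python
import random

def relay_encode(received, n_out, q):
    """RLNC at one relay: each outgoing packet is a random F_q-linear
    combination of all packets received on the incoming channels.
    `received` is a list of equal-length symbol vectors over F_q (q prime)."""
    out = []
    for _ in range(n_out):
        # local encoding kernels, drawn uniformly from F_q
        coeffs = [random.randrange(q) for _ in received]
        pkt = [sum(c * u for c, u in zip(coeffs, col)) % q
               for col in zip(*received)]
        out.append(pkt)
    return out

# two packets of 4 symbols each arrive; encode onto 3 outgoing channels over F_7
random.seed(0)
print(relay_encode([[1, 2, 3, 4], [5, 6, 0, 1]], 3, 7))
```

Because the coefficients are drawn afresh at every node, a joining or leaving node indeed requires no coordination with the preexisting nodes.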

As we know from [14], [15], the global kernel fe for a channel e is a column vector such that

The global kernels can be determined from the local kernels inductively by the following formula

Therefore, we denote by Ft the ω×k matrix containing global kernels for channels in In(t) , i.e.,

where Ft is called the decoding matrix at sink t .

To simplify the discussion, we fix a sink node t ∈T . Without loss of generality, let

be an upstream-to-downstream order of nodes. The minimum cut capacity between the source node s and the sink node t is k, where k > ω. Denote by rt the number of internal nodes on the disjoint paths P(t) from s to t. Furthermore, by Menger's Theorem, there exist k disjoint paths from s to t, denoted by P(t) = { : 1 ≤ j ≤ k}. Let  be the set of all channels in these k paths, and let  be the sequence of channels in the path satisfying

In the following we use the concept of cuts from s to t introduced in [5]. Intuitively, the first cut is CUT0(t) = {d1,⋯,dω} and the second cut is CUT1(t) = { : 1 ≤ i ≤ k}. At node i1, the next cut is obtained from CUT1(t) by replacing the channels in In(i1) ∩ CUT1(t) by their respective next channels in the path. Subsequently, once CUTj(t) is defined, CUTj+1(t) is formed in the same way, following [5]. By induction, all cuts CUTj(t) for j = 0,...,m+1 are defined. Furthermore, denote (t) = In(ij) ∩ CUTj(t) and (t) = CUTj(t) - (t).

Before proceeding, we introduce the definition of the successful decoding probabilities as follows. For convenience, let  denote the linear space spanned by the global encoding kernels in {fe : e ∈ In(t)}.

Definition 1: The successful probability of random linear network coding at sink t is

and the successful probability of random linear network coding at all sink nodes is

As shown in [3] and [5], we have the following result for this network: there exists a network code such that the messages can be decoded successfully at sink t if and only if any one of the following statements is true:

1) dim() = ω ;

2) there exist ω linearly independent global encoding kernel vectors in {fd : d ∈ In(t)} ;

3) Rank(Ft) = ω .
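Statement 3) suggests the decoding procedure directly: a sink solves the linear system y = x·Ft, which succeeds exactly when Rank(Ft) = ω. The sketch below (our illustration with a hypothetical function name, for a prime field size q) performs this by Gaussian elimination over Fq and reports failure when the rank is deficient:

```python
def decode(Ft, y, q, omega):
    """Recover the source row vector x from y = x * Ft over F_q (q prime).
    Ft is an omega x k matrix whose columns are global kernels f_e; y has
    length k. Returns x as a list, or None when Rank(Ft) < omega."""
    # each received symbol gives one equation: sum_i x_i * Ft[i][j] = y[j]
    rows = [[Ft[i][j] for i in range(omega)] + [y[j]] for j in range(len(y))]
    rank = 0
    for c in range(omega):
        piv = next((r for r in range(rank, len(rows)) if rows[r][c] % q), None)
        if piv is None:
            return None                      # rank deficiency: decoding fails
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][c], q - 2, q)   # pivot inverse via Fermat
        rows[rank] = [v * inv % q for v in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][c] % q:
                f = rows[r][c]
                rows[r] = [(a - f * b) % q for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return [rows[i][omega] for i in range(omega)]
```

For example, with ω = 2, k = 3 and q = 5, decode([[1, 0, 2], [0, 1, 3]], [3, 4, 3], 5, 2) recovers the source vector [3, 4].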

In addition, the main notation used in Section 3 is as follows:

 

3. Results

3.1 Exact Decoding Probabilities

As noted above, the performance analysis of RLNC plays an important role in both theory and applications. In general, it is difficult to compute the exact decoding probability of RLNC for a general communication network because its topology is not fixed. In this section, we calculate the exact decoding probability of RLNC for tree networks. We first give the following theorem.

Theorem 1. Let L be an n-dimensional linear space over the finite field Fq, and let L0 and L1 be two subspaces of L of dimensions k0 and k1, respectively, with  = L. Let l1,...,lr, where r = n - k0 + m (m ≥ 1), be r uniformly distributed random vectors taking values in L.

Then

Proof. To form the linear space L, we start with L0 and add the vectors l1,...,lr to the subspace L0 one by one. During the process of adding these vectors, if some vector li falls in the space

a failure, or redundancy, occurs, where B0 = L0. Under the condition that lj ∉ Bj-1 for 1 ≤ j ≤ i-1, we have

We prove this theorem by induction on m = r - n + k0, the number of redundancies in L1.

For m=1, we have r = n-k0+1. Note that if the first failure occurs in the jth selection, no further failure can occur in the subsequent selections. If such a failure occurs, with probability

we will be left with exactly n-k0 linearly independent vectors. We thus obtain the probability that n-k0 linearly independent vectors are picked, given r = n-k0+1 selections, as

By iterating the above argument, we have proved the case m=1 and obtain, for m>1, the probability

From the above equation, we obtain the formula

Assume that the result of the theorem holds for 2,⋯,m; we now prove it for m+1. We have

where  and Eq. (a) can be obtained from Eq. (10).

The theorem is proved.

Remark 1. We observe that the probability Pr(dim( ) = n) does not depend on the dimension of L1. We also notice that when k0 = 0, we obtain the following corollary, which can also be proved directly by combinatorial methods as in [16].

Corollary 2. Let L be an n-dimensional linear space over the finite field Fq. Let l1,...,lr be r uniformly distributed random vectors taking values in L. Then

Applying this corollary and the definition of CUTj(t), we compute the exact decoding probability at a destination node and at all destination nodes in tree networks. In order to decode, a destination node has to collect as many linearly independent vectors as the number of packets that were originally mixed, and then solve the resulting system of linear equations. The decoding probability thus depends on the coding design and the selected coefficients, which are extracted from a Galois field of size q.

Theorem 3. For tree networks as shown in Fig. 1, let the minimum cut capacity for sink node t∈T be k and let the information rate be ω symbols per unit time. Local encoding coefficients are chosen independently and uniformly from the finite field Fq .

Proof. Given in Appendix A.

In particular, we denote r = max{rt : t∈T}. The following corollary is obtained immediately.

Corollary 4. For tree networks as shown in Fig. 1, the failure decoding probability of RLNC at sink t satisfies:

We have noted that it is sometimes difficult to use predesigned LNC based on the network topology even if the topology is known. When nothing about the topology of a tree network except the number of internal nodes m is known, we analyze the performance of RLNC in the next corollary.

Corollary 5. For tree networks as shown in Fig. 1,

We note that we improve the bound in [5] (Theorem 2) by replacing the number

by the number m+1. As mentioned above, our analysis shows that the failure probability is a function of m, δ and q.

In practice, packet loss may occur for various reasons. Unlike previous work, we consider that the relay nodes may fail to receive some packets due to link failures. This implies that a destination may be unable to successfully decode the received data packets both because of missing packets at the relays and because of linearly dependent coefficient vectors. In general, channel failure is a low-probability event, that is, 0 ≤ p ≪ 1/2. In particular, when p = 0, the following theorem reduces to Theorem 3.
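To build intuition for this loss model, one can evaluate a simplified single-hop variant (an assumption of ours, not the paper's exact expression in Th. 6; the function name is hypothetical): each of the k channels is erased independently with probability p, and decoding succeeds when the coding vectors on the surviving channels, modeled as i.i.d. uniform vectors in Fq^ω, contain ω linearly independent ones:

```python
from math import comb

def success_prob_with_loss(omega, k, q, p):
    """Single-hop sketch: k parallel channels, each erased independently with
    probability p; the surviving channels carry i.i.d. uniform coding vectors
    in F_q^omega. Decoding succeeds when omega of them are independent."""
    total = 0.0
    for s in range(omega, k + 1):            # s = number of surviving channels
        fr = 1.0
        for i in range(omega):               # Pr(omega x s matrix is full rank)
            fr *= 1.0 - q ** (i - s)
        total += comb(k, s) * (1 - p) ** s * p ** (k - s) * fr
    return total

# effect of field size q and redundancy delta = k - omega under 5% loss
for q in (2, 8, 32):
    print(q, [round(success_prob_with_loss(5, 5 + d, q, 0.05), 4) for d in (1, 2, 3)])
```

Setting p = 0 recovers the lossless full-rank probability, consistent with the remark that Th. 6 reduces to Th. 3 in that case.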

Theorem 6. For tree networks as shown in Fig. 1 with channel failure probability p ,

Proof. Given in Appendix B. □

3.2 Numerical Results

We now show the impact of major parameters on the decoding performance. In the following, we fix the number of symbols generated at the source ω = 5.

As discussed above, our analysis concerns the decoding probability at a single sink node (and at all sink nodes) without packet loss, as a function of the finite field size q, the number of sink nodes n, and the number of internal nodes m in Th. 3. The exact probability is essentially determined by the linear independence of the random vectors employed for encoding the received packets.

Firstly, comparing Fig. 2(a) and Fig. 2(b), we observe that the number of sink nodes n plays an important role in the decoding performance: with the other parameters fixed, the smaller n is in a tree network, the better the performance. Secondly, the continuous and dashed lines in Fig. 2 show the exact decoding probability as a function of the field size q for fixed δ. Clearly, as q increases, the performance improves greatly; for q = 16, both successful decoding probabilities exceed 0.9 even with only a small amount of redundancy. Finally, each continuous or dashed line shows the probability as a function of δ for fixed q. As δ increases to 2, the performance improves dramatically, so δ has a significant impact on the decoding performance.

Fig. 2(a) shows the simulation results and the exact decoding probability at sink t obtained over 10^5 runs when ω = 5 and rt = 3. Fig. 2(b) shows the simulation results and the exact decoding probability at all sinks obtained over 10^5 runs when ω = 5, m = 10 and n = 5.
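The observation that a required field size can be minimized for a target decoding probability can be illustrated by a small search (a sketch of ours using the single-hop full-rank probability as a proxy for the exact expression of Th. 3; the helper names are hypothetical):

```python
def full_rank_prob(omega, k, q):
    """Pr(an omega x k matrix of i.i.d. uniform F_q entries has rank omega)."""
    p = 1.0
    for i in range(omega):
        p *= 1.0 - q ** (i - k)
    return p

def min_field_size(omega, k, target):
    """Smallest power-of-two field size whose single-hop decoding probability
    meets `target` (powers of two, since implementations favour GF(2^s))."""
    q = 2
    while full_rank_prob(omega, k, q) < target:
        q *= 2
    return q

# omega = 5 and delta = k - omega = 1: field size needed for each target
for target in (0.9, 0.95, 0.99):
    print(target, min_field_size(5, 6, target))
```

Under this proxy, tighter decoding-probability targets demand larger fields, matching the trend in Fig. 2.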

The main difference between Th. 3 and Th. 6 is that packet loss may occur in the latter. We now show the impact of the major parameters, i.e., the size of the finite field, the number of internal nodes m, the number of sink nodes n and the channel failure probability p, on the decoding performance at sink t and at all sinks with channel failure in Th. 6.

A comparison of Fig. 3(a) and Fig. 3(b) shows that the number of sink nodes n has a marked impact on the decoding performance when the other parameters are fixed. Clearly, as n increases, the decoding probability decreases greatly. Across the different lines, we also see that the decoding probability is a function of the field size q when δ and p are fixed; as q increases, the performance improves greatly. Further, within each line, the decoding probability is a function of δ when q and p are fixed. For δ = 1, very poor performance is obtained; indeed, the channel failure probability, which causes message decoding to fail, has a dominant effect. As δ increases to 2, the performance improves greatly, and as δ increases to 3, both successful decoding probabilities exceed 0.95 even for a small field size q. Differences between analytical and simulation results are on the order of 10^-3 in Fig. 2 and Fig. 3, which implies that the simulation results match the analytical results very well.

Fig. 3(a) shows the simulation results and the exact decoding probability at sink t obtained over 10^5 runs when ω = 5, p = 0.1 and rt = 3. Fig. 3(b) shows the simulation results and the exact decoding probability at all sinks obtained over 10^5 runs when ω = 5, p = 0.1, m = 10 and n = 5.

We investigate the impact of the channel failure probability p on the decoding performance in Fig. 4. This figure compares the successful decoding probabilities with and without packet loss for ω = 5, rt = 3, m = 10, n = 5, p = 0.05, q = 8, 16, 32 and δ = 1, 2, 3, 4. The results of Th. 3 and Th. 6 are represented by blue and red lines, respectively. For δ = 1 with channel failure probability p = 0.05, very poor performance is obtained. As δ increases to 2, the performance improves greatly; however, as δ increases from 3 to 4, the performance improves only slowly. Fig. 4 confirms the importance of q and p: as the field size increases, the decoding performance improves significantly.

Fig. 4(a) shows a comparison of successful decoding probabilities with and without packet loss at sink t as δ varies. Fig. 4(b) shows the same comparison at all sinks as δ varies.

 

4. Conclusion

In this paper, we investigated RLNC for tree networks with and without packet loss by combining techniques from linear algebra, network flows and randomization. We derived closed-form decoding probability expressions for tree networks as a function of the size of the finite field, the number of internal nodes, the number of sink nodes and the channel failure probability. For ease of computation, we also proposed a sharper bound on the required field size when the failure probability is given. We investigated the impact of these major parameters on the decoding performance at one sink and at all sinks with simulation results. Furthermore, numerical results illustrated that the required field size can be minimized when the other parameters are fixed.

References

  1. R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Trans. Inf. Theory, vol. 46, no. 4, pp. 1204-1216, Jul. 2000. https://doi.org/10.1109/18.850663
  2. Y. Zou, J. Zhu, and B. Zheng, "A fully distributed opportunistic network coding scheme for cellular relay networks," in Proc. of IEEE Wireless Communications and Networking Conference (WCNC), pp. 2937-2942, 2013.
  3. S.-Y. R. Li, R. W. Yeung, and N. Cai, "Linear network coding," IEEE Trans. Inf. Theory, vol. 49, no. 2, pp. 371-381, Feb. 2003. https://doi.org/10.1109/TIT.2002.807285
  4. T. Ho, M. Medard, R. Koetter, D. Karger, M. Effros, J. Shi and B. Leong, "A random linear network coding approach to multicast," IEEE Trans. Inf. Theory, vol. 52, no. 10, pp. 4413-4430, Oct. 2006. https://doi.org/10.1109/TIT.2006.881746
  5. H. Balli, X. Yan, and Z. Zhang, "On randomized linear network codes and their error correction capabilities," IEEE Trans. Inf. Theory, vol. 55, no. 7, pp. 3148-3160, Jul. 2009. https://doi.org/10.1109/TIT.2009.2018173
  6. O. Trullols-Cruces, J.M. Barcelo-Ordinas, and M. Fiore, "Exact decoding probability under random linear network coding," IEEE Commun. Lett., vol. 15, no. 1, pp. 67-69, Jan. 2011. https://doi.org/10.1109/LCOMM.2010.110310.101480
  7. X. Zhao, "Notes on 'Exact Decoding Probability Under Random Linear Network Coding'," IEEE Commun. Lett., vol. 16, no. 5, May 2012.
  8. C.-F. Chiasserini, E. Viterbo, and C. Casetti, "Decoding probability in random linear network coding with packet losses," IEEE Commun. Lett., vol. 17, no. 11, Nov. 2013.
  9. S. Bose, D. F. Gayme, S. Low and K. M. Chandy, "Optimal power flow over tree networks," in Proc. of 49th Annu. Allerton Conf. Communication, Control, and Computing, pp.1342-1348, 2011.
  10. A. J. Bhavnagarwala, Ashok Kapoor, and J. D. Meindl, "Generic models for interconnect delay across arbitrary wire-tree networks," Interconnect Technology Conference, pp. 129-131, 2000. http://dx.doi.org/10.1109/IITC.2000.854302
  11. X. Ma, Q.-Y. Huang, X.-M. Shu, and Q.-Y. Lu, "Design and implementation of one redundancy function in wireless tree networks," in Proc. of 2012 International Conference on MIC, IEEE, 2012.
  12. S.-H. Lee and S.-Y. Chung, "Capacity of a class of multicast tree networks," IEEE Trans. Inf. Theory, vol. 59, no. 6, pp. 3848-3857, 2013. https://doi.org/10.1109/TIT.2013.2253871
  13. Kohavi, Zvi and Israel Berger, "Fault Diagnosis in Combinational Tree Networks," IEEE Trans. Comp., pp.1161-1167, 1975.
  14. R. Koetter and M. Medard, "An algebraic approach to network coding," IEEE/ACM Trans. Netw., vol. 11, no. 5, pp. 782-795, Oct. 2003. https://doi.org/10.1109/TNET.2003.818197
  15. N. Cai, Network Coding Theory, Now Publishers Inc, 2006.
  16. J. H. van Lint and R. M. Wilson, A Course in Combinatorics, 2nd ed., Cambridge University Press, 2001.