• Title/Summary/Keyword: erasure probability

Search results: 15

A Family of Concatenated Network Codes for Improved Performance With Generations

  • Thibault, Jean-Pierre; Chan, Wai-Yip; Yousefi, Shahram
    • Journal of Communications and Networks / v.10 no.4 / pp.384-395 / 2008
  • Random network coding can be viewed as a single block code applied to all source packets. To manage the concomitant high coding complexity, source packets can be partitioned into generations; block coding is then performed on each set. To reach a better performance-complexity tradeoff, we propose a novel concatenated network code which mixes generations while retaining the desirable properties of generation-based coding. Focusing on the code's erasure performance, we show that the probability of successfully decoding a generation over erasure channels can increase substantially for any erasure rate. Using both analysis (for small networks) and simulations (for larger networks), we show how the code's parameters can be tuned to extract the best performance. As a result, the probability of failing to decode a generation is reduced by nearly one order of magnitude.
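
As an illustration of the generation idea described above (not the paper's concatenated code), here is a minimal sketch of generation-based random linear network coding over GF(2); the packet count, generation size, and choice of GF(2) rather than a larger field are all illustrative assumptions:

```python
import random

def encode_in_generation(generation, rng):
    """One coded packet: a random GF(2) combination of the generation's
    source packets (packets modeled as ints; XOR is GF(2) addition)."""
    coeffs = [rng.randint(0, 1) for _ in generation]
    payload = 0
    for c, pkt in zip(coeffs, generation):
        if c:
            payload ^= pkt
    return coeffs, payload

def rank_gf2(rows):
    """Rank of GF(2) coefficient vectors via Gaussian elimination; a
    generation is decodable once the rank equals the generation size."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

rng = random.Random(1)
source = list(range(256))     # 256 toy source packets
gen_size = 32                 # generation size: the complexity/performance knob
generations = [source[i:i + gen_size] for i in range(0, len(source), gen_size)]

# Only the coefficient vectors are needed to test decodability.
received = [encode_in_generation(generations[0], rng)[0] for _ in range(gen_size + 8)]
print("rank:", rank_gf2(received), "of", gen_size)  # decodable iff rank == gen_size
```

Coding within a generation keeps decoding cost bounded by the generation size instead of the whole stream; that is the complexity side of the tradeoff the paper's construction improves on.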

Effect of Random Node Distribution on the Throughput in Infrastructure-Supported Erasure Networks (인프라구조 도움을 받는 소거 네트워크에서 용량에 대한 랜덤 노드 분포의 효과)

  • Shin, Won-Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.5 / pp.911-916 / 2016
  • Nearest-neighbor multihop routing, with or without infrastructure support, is known to achieve the optimal capacity scaling in a large packet-erasure network in which multiple wireless nodes and relay stations are regularly placed and packets are erased with a certain probability. In this paper, a throughput scaling law is shown for an infrastructure-supported erasure network in which wireless nodes are randomly distributed, which is a more realistic scenario. We use an exponential decay model to model the erasure probability. To achieve high throughput in hybrid random erasure networks, multihop routing via highways constructed using percolation theory is proposed, and the corresponding throughput scaling is derived. As the main result, the proposed percolation-highway-based routing scheme achieves the same throughput scaling as the nearest-neighbor multihop case in hybrid regular erasure networks. That is, no performance loss occurs even when nodes are randomly distributed.
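
To make the role of the erasure model concrete, here is a toy calculation of end-to-end delivery probability under an assumed per-hop erasure law of the form 1 - exp(-γ·d^α); the constants γ and α are hypothetical, not the paper's, but the example shows why many short nearest-neighbor hops can beat one long hop:

```python
import math

def p_erase(d, gamma=1.0, alpha=2.0):
    """Assumed exponential-decay erasure law: a hop of length d fails
    with probability 1 - exp(-gamma * d**alpha)."""
    return 1.0 - math.exp(-gamma * d ** alpha)

def delivery_prob(hop_lengths):
    """Uncoded multihop: the packet must survive every hop."""
    p = 1.0
    for d in hop_lengths:
        p *= 1.0 - p_erase(d)
    return p

# Covering unit distance with many short hops vs. one long hop:
print(f"10 hops of 0.1: {delivery_prob([0.1] * 10):.3f}")  # exp(-0.1) ~ 0.905
print(f" 1 hop of 1.0: {delivery_prob([1.0]):.3f}")        # exp(-1.0) ~ 0.368
```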

Throughput Scaling Law of Hybrid Erasure Networks Based on Physical Model (물리적 모델 기반 혼합 소거 네트워크의 용량 스케일링 법칙)

  • Shin, Won-Yong
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.1 / pp.57-62 / 2014
  • The benefits of infrastructure support are shown by analyzing the throughput scaling law of an erasure network in which multiple relay stations (RSs) are regularly placed. Based on suitably modeling erasure probabilities in the assumed network, we derive the achievable network throughput in the hybrid erasure network. More specifically, we use two types of physical models, an exponential decay model and a polynomial decay model. We then analyze the achievable throughput using two existing schemes: multi-hop transmission with and without the help of RSs. Our result indicates that, for both physical models, the derived throughput scaling law depends on the number of nodes and the number of RSs.
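
The difference between the two physical models can be seen with hypothetical instances of each; the functional forms and parameter values below are illustrative stand-ins, not the paper's:

```python
import math

def delivery_exponential(d, lam=1.0):
    """Exponential decay model: per-link delivery probability exp(-lam*d)."""
    return math.exp(-lam * d)

def delivery_polynomial(d, alpha=3.0):
    """Polynomial (path-loss-like) decay model: delivery prob ~ d**(-alpha)."""
    return min(1.0, d ** (-alpha))

print(f"{'d':>4} {'exp model':>10} {'poly model':>11}")
for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d:>4} {delivery_exponential(d):>10.4f} {delivery_polynomial(d):>11.4f}")
```

The exponential model penalizes long links far more severely, which is why the derived scaling laws, and the benefit of relay stations that shorten hops, differ between the two models.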

An adaptive fault tolerance strategy for cloud storage

  • Xiai, Yan; Dafang, Zhang; Jinmin, Yang
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5290-5304 / 2016
  • With the growth of massive amounts of data, the failure probability of cloud storage nodes is becoming ever larger. A single fault tolerance strategy, such as replication or erasure codes alone, has unavoidable disadvantages and cannot meet today's fault-tolerance needs. Therefore, according to file access frequency and size, an adaptive hybrid redundant fault tolerance strategy is proposed, which can dynamically switch between the replication scheme and the erasure codes scheme throughout the file lifecycle. The experimental results show that the proposed scheme can not only save storage space (reduced by 32% compared with replication), but also ensure fast recovery from node failures (recovery speed increased by 42% compared with erasure codes).
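
A minimal sketch of the adaptive selection idea, under hypothetical thresholds and schemes (the paper tunes the switch over the file lifecycle; the exact policy and parameter values below are assumptions, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class FileStats:
    size_bytes: int
    accesses_per_day: float

# Hypothetical thresholds for the replication/erasure-code switch.
HOT_ACCESS_THRESHOLD = 10.0      # accesses/day above which read latency matters
SMALL_FILE_THRESHOLD = 1 << 20   # 1 MiB; tiny files gain little from striping

def choose_redundancy(f: FileStats) -> str:
    """Replication favors hot/small files (fast reads and repair);
    erasure coding favors cold/large files (low storage overhead)."""
    if f.accesses_per_day >= HOT_ACCESS_THRESHOLD or f.size_bytes < SMALL_FILE_THRESHOLD:
        return "replication(x3)"
    return "erasure_code(RS 6+3)"

print(choose_redundancy(FileStats(512 * 1024, 50.0)))  # hot, small -> replication
print(choose_redundancy(FileStats(4 << 30, 0.2)))      # cold, large -> erasure code
```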

A Disk-based Archival Storage System Using the EOS Erasure Coding Implementation for the ALICE Experiment at the CERN LHC

  • Ahn, Sang Un; Betev, Latchezar; Bonfillou, Eric; Han, Heejune; Kim, Jeongheon; Lee, Seung Hee; Panzer-Steindel, Bernd; Peters, Andreas-Joachim; Yoon, Heejun
    • Journal of Information Science Theory and Practice / v.10 no.spc / pp.56-65 / 2022
  • Korea Institute of Science and Technology Information (KISTI) is a Worldwide LHC Computing Grid (WLCG) Tier-1 center mandated to preserve raw data produced by A Large Ion Collider Experiment (ALICE) using the world's largest particle accelerator, the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN). The physical medium most widely used for long-term data preservation is tape, thanks to its reliability and its lowest price per capacity compared to other media such as optical disk, hard disk, and solid-state disk. However, the decreasing number of manufacturers of both tape drives and cartridges, and patent disputes among them, have escalated market risk. As an alternative to a tape-based data preservation strategy, we proposed a disk-only erasure-coded archival storage system, Custodial Disk Storage (CDS), powered by Exascale Open Storage (EOS), an open-source storage management software developed by CERN. The CDS system consists of 18 high-density Just-Bunch-Of-Disks (JBOD) enclosures attached to 9 servers through 12 Gbps Serial Attached SCSI (SAS) Host Bus Adapter (HBA) interfaces via multiple paths for redundancy and multiplexing. For data protection, we introduced a Reed-Solomon (RS) (16, 4) Erasure Coding (EC) layout, where the numbers of data and parity blocks are 12 and 4 respectively, which gives an annual data loss probability equivalent to 5×10^-14. In this paper, we discuss the CDS system design based on JBOD products, performance limitations, and the data protection strategy accommodating the EOS EC implementation. We present CDS operations for the ALICE experiment and a long-term power consumption measurement.
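
For intuition on how a (12+4) layout protects a stripe, here is a toy static-failure calculation: the stripe is lost only if more than 4 of its 16 blocks fail. The per-block failure probability below is hypothetical, and the model ignores repair, so it does not reproduce the paper's 5×10^-14 figure:

```python
from math import comb

def stripe_loss_prob(n=16, parity=4, p_block=0.01):
    """P(more than `parity` of the n blocks fail), assuming independent
    block failures with probability p_block (hypothetical value)."""
    return sum(comb(n, k) * p_block**k * (1 - p_block)**(n - k)
               for k in range(parity + 1, n + 1))

print(f"RS(16,4), 12+4 layout: {stripe_loss_prob():.2e}")          # ~3.9e-07
print(f"3-way replication:     {stripe_loss_prob(n=3, parity=2):.2e}")  # p^3 = 1e-06
```

Even in this crude model, the 12+4 layout is both safer and far cheaper in raw capacity (1.33x overhead vs. 3x) than triple replication.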

Iterative Reliability-based Decoding of LDPC Codes with Low Complexity BEC Decoding (이진 소실 채널 복호를 이용한 신뢰기반 LDPC 반복 복호)

  • Kim, Sang-Hyo
    • Proceedings of the IEEK Conference / 2008.06a / pp.14-15 / 2008
  • In this paper, a new iterative decoding of LDPC codes is proposed. The decoding is based on the a posteriori probabilities produced by belief propagation (BP) decoding, followed by an additional postprocessing step: erasure decoding of the LDPC code. The new method turns out to consistently improve decoding performance on various classes of LDPC codes. For example, it effectively removes the error floor of the Margulis codes.

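The erasure-decoding postprocessing can be pictured with the classic BEC peeling decoder: bits deemed unreliable after BP are marked as erasures, and any parity check with exactly one erased participant resolves it. The sketch below uses a tiny (7,4) Hamming code as a stand-in for an LDPC code, and the received word is illustrative:

```python
def peel_bec(check_rows, bits):
    """BEC peeling decoder: repeatedly find a check with exactly one
    erased bit (None) and set it to the XOR of the check's known bits."""
    bits = list(bits)
    progress = True
    while progress:
        progress = False
        for row in check_rows:
            erased = [i for i in row if bits[i] is None]
            if len(erased) == 1:
                bits[erased[0]] = sum(bits[i] for i in row
                                      if bits[i] is not None) % 2
                progress = True
    return bits

# (7,4) Hamming parity checks as index sets (stand-in for an LDPC code).
checks = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 1, 3, 6]]
received = [None, 0, None, 1, 0, 0, 0]   # bits 0 and 2 marked as erasures
print(peel_bec(checks, received))        # recovers [1, 0, 1, 1, 0, 0, 0]
```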

Selected Mapping Technique Based on Erasure Decoding for PAPR Reduction of OFDM Signals (OFDM 신호의 PAPR 감소를 위한 소실 복호 기반의 SLM 기법)

  • Kong, Min-Han; Song, Moon-Kyou
    • Journal of the Institute of Electronics Engineers of Korea TC / v.44 no.2 / pp.22-28 / 2007
  • High PAPR (peak-to-average power ratio) is a major drawback of OFDM (orthogonal frequency division multiplexing) signals. In this paper, a modified SLM (selected mapping) technique that uses erasure decoding of RS (Reed-Solomon) codes is presented. At the transmitter, a set of phase sequences is applied such that some portion of the check symbols in RS-coded OFDM data blocks is phase-rotated. At the receiver, RS decoding is performed with the phase-rotated check symbols treated as erasures. Hence, there is no need to send side information about which phase sequence was selected for the lowest PAPR. In addition, the estimation process for the selected phase sequence is no longer needed at the receiver, leading to improvements in both complexity and performance. To evaluate the performance of this technique, the CCDF (complementary cumulative distribution function) of the PAPR, the BER (bit error rate), and the decoding failure probability are compared with those of previous SLM techniques.
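
Conventional SLM, the baseline the paper modifies, picks the lowest-PAPR candidate among phase-rotated versions of the OFDM block. Here is a minimal sketch of that selection step; the QPSK data, ±1 phase sequences, and block sizes are illustrative, and the paper's twist of rotating only RS check symbols and recovering the choice via erasure decoding is not shown:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(0)
N, U = 64, 8                                 # subcarriers, candidate sequences
X = rng.choice([1, -1, 1j, -1j], size=N)     # QPSK-modulated data block

best = None
for u in range(U):
    phases = rng.choice([1, -1], size=N)     # per-subcarrier phase rotation
    candidate = np.fft.ifft(X * phases)      # time-domain OFDM candidate
    score = (papr_db(candidate), u)
    best = score if best is None or score < best else best

print(f"selected sequence {best[1]} with PAPR {best[0]:.2f} dB")
```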

Calculation Methods for Slot Utilization Based on Erasure nodes in DQDB Networks (소거노드 기반 DQDB망의 슬롯 이용률 평가식)

  • Cho, Kyoung-Sook; Oh, Bum-Suk; Kim, Chong-Gun
    • The Transactions of the Korea Information Processing Society / v.5 no.10 / pp.2654-2662 / 1998
  • The maximum single-bus throughput of the standard IEEE 802.6 DQDB (Distributed Queue Dual Bus) cannot exceed 1. Therefore, many studies have sought to improve bus throughput through QA slot preuse/reuse. We propose three calculation methods for network utilization under a preuse/reuse scheme based on erasure nodes: one based on a traffic density function, one for obtaining the maximum throughput, and one using a probability model that follows the real DQDB operation mechanisms. The calculated throughputs are compared with each other, and the results show favorable behavior. The proposed calculation methods can be easily extended in the number of nodes or the number of erasure nodes.

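The reason slot reuse can push utilization past 1 is easy to see with a toy model: an erasure node clears slots whose packets were already delivered in the upstream segment, letting downstream stations refill them. A Monte Carlo sketch under assumed segment loads (all parameters hypothetical, and traffic is assumed to stay within its segment):

```python
import random

def simulate_utilization(p_up=0.9, p_down=0.9, slots=100_000, seed=1):
    """Toy model: one erasure node splits the bus into two segments.
    Upstream stations fill a slot with probability p_up; the erasure
    node clears delivered slots so downstream stations can refill
    them with probability p_down."""
    rng = random.Random(seed)
    carried = 0
    for _ in range(slots):
        if rng.random() < p_up:
            carried += 1      # packet delivered within the upstream segment
        # erasure node clears the slot, enabling spatial reuse downstream
        if rng.random() < p_down:
            carried += 1      # packet delivered within the downstream segment
    return carried / slots    # normalized throughput; can exceed 1

print(f"utilization ~ {simulate_utilization():.2f}")  # ~1.80 with these loads
```

This spatial-reuse gain is the kind of effect the proposed calculation methods quantify analytically.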

Jeju Jong Nang Channel Code I (제주 정낭 채널 Code I)

  • Lee, Moon Ho
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.12 / pp.27-35 / 2012
  • In this paper, we look into the Jong Nang channel, which is the origin of digital communications and has been used on Jeju Island since AD 1234. It is a communication method that, using its own protocol, informs people whether a house owner is at home. It consists of three timbers and two stone pillars, each pillar having three holes on one side. In this paper, we analyze the Jong Nang channel in terms of both logic and bit error probability. In addition, we compare it with a conventional binary erasure channel when errors occur over each. We also show that the capacity of the NOR channel approaches the Shannon limit.
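
For reference alongside the comparison with the binary erasure channel, the textbook capacities are easy to compute; this is standard information theory, not the paper's NOR-channel derivation:

```python
import math

def bec_capacity(eps):
    """Binary erasure channel: C = 1 - eps bits per channel use."""
    return 1.0 - eps

def bsc_capacity(p):
    """Binary symmetric channel: C = 1 - H2(p)."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * math.log2(p) + (1 - p) * math.log2(1 - p)

for q in (0.0, 0.1, 0.3, 0.5):
    print(f"q={q}: BEC capacity={bec_capacity(q):.3f}, "
          f"BSC capacity={bsc_capacity(q):.3f}")
```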

Neural Equalization Techniques in Partial Erasure Model of Nonlinear Magnetic Recording Channel (부분 삭제 모델로 나타난 비선형 자기기록 채널에서의 신경망 등화기법)

  • Choi, Soo-Yong; Ong, Sung-Hwan; You, Cheol-Woo; Hong, Dae-Sik
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.12 / pp.103-108 / 1998
  • The increase in the capacity of digital magnetic recording systems inevitably causes severe intersymbol interference (ISI) and nonlinear distortions in the digital magnetic recording channel. In this paper, to cope with severe ISI and nonlinear distortions, a neural decision feedback equalizer (NDFE) is applied to the digital magnetic recording channel under a partial erasure channel model. Computer simulations comparing the bit error probability (bit error ratio, BER) of the NDFE and the conventional decision feedback equalizer (DFE) show that, as nonlinear distortions increase, the NDFE gains a larger SNR (signal-to-noise ratio) advantage over the conventional DFE. In addition, at the same recording density, the NDFE achieves better BER performance and greater stability than the conventional DFE as nonlinear distortions grow.

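To show the structure an NDFE builds on, here is a conventional LMS-adapted DFE on a toy ISI channel; the neural variant in the paper replaces the linear filters with a neural network. The channel, tap counts, and step size below are illustrative assumptions:

```python
import numpy as np

def lms_dfe(rx, ff_taps=5, fb_taps=3, mu=0.01, train=None):
    """Conventional decision feedback equalizer with LMS adaptation:
    a feedforward filter over received samples plus a feedback filter
    that cancels ISI from past decisions."""
    w_ff = np.zeros(ff_taps)          # feedforward filter weights
    w_fb = np.zeros(fb_taps)          # feedback filter weights
    past = np.zeros(fb_taps)          # past (reference) decisions
    out = []
    for n in range(len(rx)):
        x = rx[max(0, n - ff_taps + 1):n + 1][::-1]
        x = np.pad(x, (0, ff_taps - len(x)))  # zero-pad at stream start
        y = w_ff @ x - w_fb @ past            # equalizer output
        d = 1.0 if y >= 0 else -1.0           # slicer decision (BPSK)
        ref = train[n] if train is not None else d
        e = ref - y                           # error for the LMS update
        w_ff += mu * e * x
        w_fb -= mu * e * past
        past = np.roll(past, 1)
        past[0] = ref
        out.append(d)
    return np.array(out)

# Toy ISI channel: a BPSK stream smeared over two symbols plus noise.
rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=2000)
rx = tx + 0.5 * np.roll(tx, 1) + 0.05 * rng.standard_normal(tx.size)
dec = lms_dfe(rx, train=tx)   # trained here; decision-directed in practice
print("BER:", np.mean(dec != tx))
```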