• Title/Summary/Keyword: Error Check Algorithm


Error Check Algorithm in the Wireless Transmission of Digital Data by Water Level Measurement

  • Kim, Hie-Sik;Seol, Dae-Yeon;Kim, Young-Il;Nam, Chul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1666-1668
    • /
    • 2004
  • Wirelessly transmitted data has a high probability of being distorted or lost due to noise and obstacles in the radio channel. If the receiver cannot detect which data have been damaged or lost, it cannot judge whether the received data are correct. Therefore, wireless data transmission needs an error check algorithm to reduce data distortion and loss and to monitor the transmitted data in real time. This study consists of an RF station for wireless transmission, a water level meter station for water level measurement, and an error check algorithm for checking the transmitted data. It also investigates error check algorithms for wireless digital data transmission with minimal data damage and loss. The transmitter and receiver were designed around a one-chip microprocessor to keep the circuit volume small. The RF transmitter-receiver stations were implemented simply with an ATMEL one-chip microprocessor. The system uses an RF power of at most 10 mW in the 448-449 MHz frequency band, for which use is permitted under the Frequency Law of the Korean government.

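The abstract does not specify which error check scheme the system used; as a hedged illustration of the idea (letting the receiver judge whether a frame was distorted in transit), a minimal CRC-16 frame check might look like the following. The frame layout, polynomial, and helper names are assumptions for illustration, not the paper's design.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT over a byte frame."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def make_frame(payload: bytes) -> bytes:
    """Transmitter side: append the CRC so the receiver can verify."""
    return payload + crc16_ccitt(payload).to_bytes(2, "big")

def check_frame(frame: bytes) -> bool:
    """Receiver side: recompute the CRC over the payload and compare it
    with the received CRC; a mismatch marks the frame as damaged."""
    payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return crc16_ccitt(payload) == received
```

An intact frame passes the check, while any single-bit corruption is detected, which is what lets a receiver discard damaged water-level readings instead of treating them as valid.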

LDPC Decoding by Failed Check Nodes for Serial Concatenated Code

  • Yu, Seog Kun;Joo, Eon Kyeong
    • ETRI Journal
    • /
    • v.37 no.1
    • /
    • pp.54-60
    • /
    • 2015
  • The use of serial concatenated codes is an effective technique for alleviating the error floor phenomenon of low-density parity-check (LDPC) codes. An enhanced sum-product algorithm (SPA) for LDPC codes, which is suitable for serial concatenated codes, is proposed in this paper. The proposed algorithm minimizes the number of errors by using the failed check nodes (FCNs) in LDPC decoding. Hence, the error-correcting capability of the serial concatenated code can be improved. The number of FCNs is simply obtained by the syndrome test, which is performed during the SPA. Thus, the decoding procedure of the proposed algorithm is similar to that of the conventional algorithm. The error performance of the proposed algorithm is analyzed and compared with that of the conventional algorithm. As a result, a gain of 1.4 dB can be obtained by the proposed algorithm at a bit error rate of 10⁻⁸. In addition, the error performance of the proposed algorithm with just 30 iterations is shown to be superior to that of the conventional algorithm with 100 iterations.
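The syndrome test that yields the FCN count can be sketched as follows; a check node "fails" when its parity equation over the hard-decision word is unsatisfied. The toy parity-check matrix is a (7,4) Hamming code for illustration, not an LDPC code from the paper.

```python
def failed_check_nodes(H, x):
    """Syndrome test: count the parity-check equations (rows of H) that
    the hard-decision word x fails to satisfy, with arithmetic mod 2."""
    return sum(sum(h * b for h, b in zip(row, x)) % 2 for row in H)

# Toy (7,4) Hamming parity-check matrix; columns are d1..d4, p1..p3.
H = [[1, 1, 0, 1, 1, 0, 0],
     [1, 0, 1, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
codeword = [1, 0, 1, 1, 0, 1, 0]  # a valid codeword: zero failed checks
```

Flipping any bit of `codeword` leaves at least one row unsatisfied, so the FCN count becomes nonzero; in the proposed scheme this count, available for free during the SPA, is what guides the decoder.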

The Development of the Data Error Inspection Algorithm for the Remote Sensing by Wireless Communication (원격계측을 위한 무선 통신 에러 검사 알고리즘 개발)

  • Kim, Hie-Sik;Kim, Young-Il;Seol, Dae-Yeon;Nam, Chul
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.993-997
    • /
    • 2004
  • A data error inspection algorithm for wireless digital data communication was developed. Wirelessly transmitted digital data has a high probability of being distorted or lost due to noise and obstacles in the radio channel. If the receiver cannot detect which data have been damaged or lost, it cannot judge whether the received data are correct. Therefore, wireless data transmission needs an error inspection algorithm to reduce data distortion and loss and to monitor the transmitted data in real time. This study consists of an RF station for wireless transmission, a water level meter station for water level measurement, and an error inspection algorithm for checking the transmitted data. It also investigates error inspection algorithms for wireless digital data transmission with minimal data damage and loss. The transmitter and receiver were designed around a one-chip microprocessor to keep the circuit volume small. The RF transmitter-receiver stations were implemented simply with an ATMEL one-chip microprocessor. The system uses an RF power of at most 10 mW in the 448-449 MHz frequency band, which is open to public use, free of charge, within the permitted power limit.


Selection-based Low-cost Check Node Operation for Extended Min-Sum Algorithm

  • Park, Kyeongbin;Chung, Ki-Seok
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.2
    • /
    • pp.485-499
    • /
    • 2021
  • Although non-binary low-density parity-check (NB-LDPC) codes have better error-correction capability than binary LDPC codes, their decoding complexity is significantly higher. Therefore, it is crucial to reduce the decoding complexity of NB-LDPC codes while maintaining their error-correction capability in order to adopt them for various applications. The extended min-sum (EMS) algorithm is widely used for decoding NB-LDPC codes, and it reduces the complexity of check node (CN) operations via message truncation. Herein, we propose a low-cost CN processing method to reduce the complexity of CN operations, which take most of the decoding time. Unlike existing studies on low-complexity CN operations, the proposed method employs a quick-selection algorithm, thereby reducing the hardware complexity and the CN operation time. The experimental results show that the proposed selection-based CN operation is more than three times faster and achieves better error-correction performance than the conventional EMS algorithm.
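The abstract names quick selection as the replacement for full sorting in CN message truncation. A minimal sketch of quickselect keeping the k smallest (most reliable in the LLR-magnitude convention, an assumption here) message values might look like this:

```python
import random

def quickselect_smallest(values, k):
    """Return the k smallest entries of `values` (in no particular order)
    in expected O(n) time, avoiding the O(n log n) cost of a full sort."""
    vals = list(values)
    lo, hi = 0, len(vals)
    while hi - lo > 1:
        pivot = vals[random.randrange(lo, hi)]
        left = [v for v in vals[lo:hi] if v < pivot]
        mid = [v for v in vals[lo:hi] if v == pivot]
        right = [v for v in vals[lo:hi] if v > pivot]
        vals[lo:hi] = left + mid + right
        if k < lo + len(left):
            hi = lo + len(left)          # k-th boundary lies inside `left`
        elif k <= lo + len(left) + len(mid):
            break                        # boundary falls among equal pivots
        else:
            lo = lo + len(left) + len(mid)
    return vals[:k]
```

Because truncation only needs the set of the k most reliable messages, not their full ordering, a selection algorithm does strictly less work than sorting, which is the source of the speedup the paper exploits in hardware.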

Development and Assessment of Real-Time Quality Control Algorithm for PM10 Data Observed by Continuous Ambient Particulate Monitor (부유분진측정기(PM10) 관측 자료 실시간 품질관리 알고리즘 개발 및 평가)

  • Kim, Sunyoung;Lee, Hee Choon;Ryoo, Sang-Boom
    • Atmosphere
    • /
    • v.26 no.4
    • /
    • pp.541-551
    • /
    • 2016
  • A real-time quality control algorithm for PM10 concentration measured by a Continuous Ambient Particulate Monitor (FH62C14, Thermo Fisher Scientific Inc.) has been developed. The quality control algorithm for PM10 data consists of five main procedures. The first step is a valid value check: values must lie within the acceptable range, so readings at the upper (5,000 μg m⁻³) and lower (0 μg m⁻³) instrument detectable limits are eliminated as unrealistic. The second step is a valid error check: whenever an unusual condition occurs, the instrument saves an error code, and values carrying an error code are eliminated. The third step is a persistence check, which requires a minimum variability of the data over a certain period; if the PM10 data have not varied over the past 60 minutes by more than the specified limit (0 μg m⁻³), the current 5-minute value fails the check. The fourth step is a time continuity check, used to eliminate gross outliers. The last step is a spike check, in which spikes in the time series are detected; the outlier detection is based on the double-difference time series, using the median. Flags indicating normal and abnormal are added to the raw data after the quality control procedure. The quality control algorithm was applied to PM10 data for an Asian dust and a non-Asian dust case at the Seoul site and to a dataset for the period 2013~2014 at 26 sites in Korea.
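Three of the five steps above can be sketched as follows. The range and persistence limits come from the abstract; the spike-check formulation is a simplified assumption based on the double-difference/median description, and the threshold value is illustrative.

```python
from statistics import median

RANGE_MIN, RANGE_MAX = 0.0, 5000.0   # instrument detectable limits (ug m^-3)

def range_check(value):
    """Step 1: values at or beyond the detectable limits are unrealistic."""
    return RANGE_MIN < value < RANGE_MAX

def persistence_check(window_60min):
    """Step 3: the data must vary by more than 0 ug m^-3 over the past
    60 minutes; a perfectly flat window fails the current 5-minute value."""
    return (max(window_60min) - min(window_60min)) > 0.0

def spike_check(series, threshold):
    """Step 5 (simplified): flag points whose double difference deviates
    from the median of the double-difference series by more than
    `threshold` (exact formulation and threshold are assumptions)."""
    dd = [series[i - 1] - 0.5 * (series[i - 2] + series[i])
          for i in range(2, len(series))]
    m = median(dd)
    return [abs(d - m) > threshold for d in dd]
```

Using the median of the double-difference series makes the spike detector robust: an isolated spike produces a large double difference at its own position while barely moving the median.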

Simplified 2-Dimensional Scaled Min-Sum Algorithm for LDPC Decoder

  • Cho, Keol;Lee, Wang-Heon;Chung, Ki-Seok
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.3
    • /
    • pp.1262-1270
    • /
    • 2017
  • Among various decoding algorithms for low-density parity-check (LDPC) codes, the min-sum (MS) algorithm and its modified versions are widely adopted because of their computational simplicity compared to the sum-product (SP) algorithm, at a slight loss of decoding performance. In the MS algorithm, the magnitude of the output message from a check node (CN) processing unit is decided by either the smallest or the next-smallest input message, denoted min1 and min2, respectively. It has been shown that multiplying the CN output message by a scaling factor improves the decoding performance. Further, Zhong et al. have shown that multiplying min1 and min2 by different scaling factors (called 2-dimensional scaling) increases the performance of the LDPC decoder considerably. In this paper, the simplified 2-dimensional scaled (S2DS) MS algorithm is proposed. In the proposed algorithm, we identify a pair of the most efficient scaling factors whose multiplications can be replaced with combinations of addition and shift operations. Furthermore, one scaling operation is approximated by the difference between min1 and min2. The simulation results show that S2DS achieves error-correcting performance close to, or better than, that of the SP algorithm regardless of coding rate, and its computational complexity is the lowest among the modified versions of the MS algorithm.
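The shift-and-add replacement for scaling-factor multiplication can be sketched as follows. The factor pair used here (0.75 for min1, 0.875 for min2) is an illustrative assumption, not the pair identified in the paper, and messages are fixed-point integers.

```python
def scale_min1(x: int) -> int:
    """0.75*x realized as (x >> 1) + (x >> 2): shifts and one add only."""
    return (x >> 1) + (x >> 2)

def scale_min2(x: int) -> int:
    """0.875*x realized as x - (x >> 3): one shift and one subtract."""
    return x - (x >> 3)

def cn_magnitudes(in_mags):
    """Min-sum CN update (magnitudes only): every output edge carries min1,
    except the edge that supplied min1, which carries min2; each value is
    then scaled by its own factor (2-dimensional scaling)."""
    min1 = min(in_mags)
    idx = in_mags.index(min1)
    min2 = min(m for i, m in enumerate(in_mags) if i != idx)
    return [scale_min2(min2) if i == idx else scale_min1(min1)
            for i in range(len(in_mags))]
```

Because both factors reduce to shifts and a single add or subtract, no hardware multiplier is needed in the CN unit, which is the point of the simplification.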

A new syndrome check error estimation algorithm and its concatenated coding for wireless communication

  • Lee, Moon-Ho;Jang, Jin-Soo;Choi, Seung-Bae
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.7
    • /
    • pp.1419-1426
    • /
    • 1997
  • A new SCEE (Syndrome Check Error Estimation) decoding method for convolutional codes and a concatenated SCEE/RS (Reed-Solomon) coding scheme are proposed. First, we describe the operation of the decoding steps in the proposed algorithm. Then, deterministic values for the decoding operation are derived when a certain combination of predecoder and re-encoder is used. Computer simulation results show that the computational complexity of the proposed SCEE decoder is significantly reduced compared to that of a conventional Viterbi decoder, without degradation of the P_e performance. Also, the concatenated SCEE/RS decoder has almost the same complexity as an RS decoder, and its coding gain is higher than that of a soft-decision Viterbi decoder or an RS decoder.


An Improvement of UMP-BP Decoding Algorithm Using the Minimum Mean Square Error Linear Estimator

  • Kim, Nam-Shik;Kim, Jae-Bum;Park, Hyun-Cheol;Suh, Seung-Bum
    • ETRI Journal
    • /
    • v.26 no.5
    • /
    • pp.432-436
    • /
    • 2004
  • In this paper, we propose a modified uniformly most powerful (UMP) belief-propagation (BP)-based decoding algorithm, which utilizes multiplicative and additive factors to diminish the errors introduced by the approximation of the soft values in a previously proposed UMP BP-based algorithm. The modified UMP BP-based algorithm shows better performance than the normalized UMP BP-based algorithm, i.e., it has an error performance closer to BP on the additive white Gaussian noise channel for low-density parity-check codes. Also, this algorithm has the same implementation complexity as the normalized UMP BP-based algorithm.

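The multiplicative and additive correction applied to the min-approximated check-node message can be sketched as follows; the factor values `alpha` and `beta` are illustrative assumptions, not the factors derived in the paper.

```python
def cn_update(llrs, alpha=0.8, beta=0.15):
    """UMP-style CN update: each output edge takes the sign product and the
    minimum magnitude of the *other* edges, then a multiplicative (alpha)
    and additive (beta) correction shrinks the overestimated magnitude,
    clamped at zero. alpha/beta here are placeholders for illustration."""
    out = []
    for i in range(len(llrs)):
        others = [l for j, l in enumerate(llrs) if j != i]
        sign = 1.0
        for l in others:
            sign *= 1.0 if l >= 0 else -1.0
        mag = min(abs(l) for l in others)
        out.append(sign * max(alpha * mag - beta, 0.0))
    return out
```

The min approximation always overestimates the true message magnitude, so shrinking it toward zero with a scale and an offset moves the result closer to what full BP would compute, at negligible extra cost.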

Self-Adaptive Termination Check of Min-Sum Algorithm for LDPC Decoders Using the First Two Minima

  • Cho, Keol;Chung, Ki-Seok
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.4
    • /
    • pp.1987-2001
    • /
    • 2017
  • Low-density parity-check (LDPC) codes have attracted great attention because of their excellent error correction capability with reasonably low decoding complexity. Among decoding algorithms for LDPC codes, the min-sum (MS) algorithm and its modified versions have been widely adopted due to their high efficiency in hardware implementation. In this paper, a self-adaptive MS algorithm using the difference of the first two minima is proposed for faster decoding speed and lower power consumption. Finding the first two minima is an important operation when MS-based LDPC decoders are implemented in hardware, and the found minima are often compressed using the difference of the two values to reduce interconnection complexity and memory usage. It is found that, when these difference values are bounded, decoding does not terminate successfully. Thus, the proposed method dynamically decides whether the termination-checking step will be carried out, based on the difference of the two found minima. The simulation results show that the decoding speed is improved by 7%, and the power consumption is reduced by 16.34%, by skipping unnecessary steps in unsuccessful iterations without any loss in error correction performance. In addition, the synthesis results show that the hardware overhead of the proposed method is negligible.
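The self-adaptive decision can be sketched as a simple gate on the stored difference of the two minima; the exact condition and the limit value are assumptions for illustration, not the rule derived in the paper.

```python
def should_check_termination(min1: int, min2: int, sat_limit: int) -> bool:
    """Self-adaptive rule (sketch): the decoder stores min2 - min1 in
    compressed form; when that difference is bounded by (saturates at)
    `sat_limit`, the current iteration is deemed unlikely to terminate
    successfully, so the costly syndrome/termination check is skipped."""
    return (min2 - min1) < sat_limit
```

Skipping the check in iterations that cannot succeed saves both cycles and switching activity, which is where the reported speed and power gains come from.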

Improvement and Evaluation of Automatic Quality Check Algorithm for Particulate Matter (PM10) by Analysis of Instrument Status Code (부유분진(PM10) 측정기 상태 코드 분석을 통한 자동 품질검사 알고리즘 개선 및 평가)

  • Kim, Mi-Gyeong;Park, Young-San;Ryoo, Sang-Boom;Cho, Jeong Hoon
    • Atmosphere
    • /
    • v.29 no.4
    • /
    • pp.501-509
    • /
    • 2019
  • Asian dust is a meteorological phenomenon in which sand particles are raised from arid and semi-arid regions (the Taklamakan Desert, the Gobi Desert, and Inner Mongolia in China), transported by westerlies, and deposited on the surface. Asian dust has negative effects on human health as well as on environmental, social, and economic aspects. For monitoring of Asian dust, the Korea Meteorological Administration operates 29 stations using a continuous ambient particulate monitor. Kim et al. (2016) developed an automatic quality check (AQC) algorithm for objective and systematic quality control of observed PM10 concentrations and evaluated the AQC against the results of a manual quality check (MQC). The results showed that the AQC algorithm could detect abnormal observations efficiently, but it also produced a large number of false alarms resulting from the valid error check. To remedy this deficiency and to develop an AQC system that can be applied in real time, the AQC was modified. Based on an analysis of instrument status codes, the valid error check process was revised, and 6 status codes were further considered normal. Also, the time continuity check and the spike check were modified so that posterior data are not referenced at inspection time. Two years of observed PM10 concentration data and the corresponding MQC results were used to evaluate the modified AQC against the original AQC algorithm. The results showed that the false alarm ratio decreased from 0.44 to 0.09, while the accuracy and the probability of detection were well preserved despite the exclusion of posterior data at inspection time.
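The two revisions above can be sketched as follows. The status code values are placeholders (the paper reclassified six instrument codes as normal, but their actual values are not given here), and the continuity threshold is an illustrative assumption.

```python
# Placeholder values: the revised AQC treats six instrument status codes
# as normal; the real code values are not reproduced in the abstract.
NORMAL_STATUS_CODES = {0, 1, 2, 4, 8, 16}

def valid_error_check(status_code):
    """Revised step 2: keep the observation when its instrument status
    code is one of those reclassified as normal."""
    return status_code in NORMAL_STATUS_CODES

def time_continuity_check(history, current, max_step):
    """Revised for real-time use: compare the current value only against
    *prior* observations, never posterior data at inspection time."""
    if not history:
        return True                     # nothing earlier to compare with
    return abs(current - history[-1]) <= max_step
```

Referencing only prior data is what makes the check usable in real time: the flag for each 5-minute value can be issued immediately instead of waiting for later observations.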