Forward error correction
In telecommunications, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used to control errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message redundantly, almost always using an error-correcting code (ECC). This mechanism allows the receiver to correct errors without retransmission of the original information. It is used in one-way (simplex) systems, or in real-time systems where one cannot wait for a retransmission before presenting the data. This error correction mechanism is used, for example, in satellite communications, in DVD and CD recorders, and in DTT broadcasts for mobile terminals (the DVB-H standard).
General principles
Error protection is one of several elements that make up the digital broadcasting process.
The technique consists of a series of modifications applied to the main signal (the useful data) before transmission. Once the signal reaches the receiver, altered by channel noise, these modifications are decoded and help detect errors and thereby substantially reduce the bit error rate. The modifications are carried out in a certain order and at different levels of depth, so the receiver follows the reverse order to decode and correct errors up through the upper layers, where we finally recover the original signal sent by the sender. Broadly speaking, the modifications rest on two concepts:
- High synchronism between transmitter and receiver.
- Redundancy at byte and bit levels.
Both help reduce the bit error rate (BER) at the receiver.
Synchronism
Receiver synchronization is an important factor in keeping the bit error rate low. In digital broadcasting, data is sent using whatever digital modulation is found to be most suitable (depending on the desired bit rate, available bandwidth, and so on). Depending on this modulation, a certain number of bits are grouped into symbols, and all the possible combinations of that number of bits are mapped onto what is called a constellation. Synchronism simply means that the receiver knows with high precision where each symbol begins and ends (at which instants of time). For each possible symbol, the sender transmits a signal distinguishable from the others during a period called the symbol time. When, after a symbol time, the binary sequence to be sent changes, we see a change in the transmitted signal called a transition. When the same symbol has to be transmitted several times consecutively, there are no transitions for a certain time. The lack of transitions impairs the synchronization of the receiver; therefore, what is often done in telecommunications is to force transitions so that, during the periods where the same symbol is transmitted repeatedly, the receiver can maintain synchronism.
A fairly simple example of an encoding that forces transitions is the Biphase Mark Code (BMC), sketched below.
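As an illustration, here is a minimal BMC encoder sketch (Python; the function name and output representation are illustrative assumptions). Every bit period begins with a forced transition, so the receiver sees level changes even during long runs of identical bits:

```python
def bmc_encode(bits, level=0):
    """Biphase Mark Code sketch: every bit period starts with a
    transition (guaranteeing synchronism regardless of the data);
    a 1 adds a second, mid-period transition, a 0 does not.
    Returns the signal as a list of half-bit levels (0 or 1)."""
    halves = []
    for bit in bits:
        level ^= 1              # mandatory transition at the start of the bit
        halves.append(level)
        if bit == 1:
            level ^= 1          # extra mid-bit transition encodes a 1
        halves.append(level)
    return halves

print(bmc_encode([1, 1, 0, 1]))   # -> [1, 0, 1, 0, 1, 1, 0, 1]
print(bmc_encode([0, 0, 0, 0]))   # all-zeros data still transitions every bit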
In the DVB digital television broadcasting standard, the error protection (FEC) block includes a system that forces transitions to maintain receiver synchronism: the scrambler.
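A minimal sketch of such a scrambler, assuming the PRBS generator polynomial 1 + x^14 + x^15 used for DVB energy dispersal (the seed value and bit ordering are simplified here, not taken verbatim from the standard):

```python
def scramble(bits, seed=0b100101010000000):
    """Additive scrambler sketch: a 15-bit LFSR with taps at x^15
    and x^14 generates a pseudo-random sequence that is XORed with
    the payload, breaking up long runs of identical bits so the
    transmitted signal keeps its transitions. Because the LFSR state
    does not depend on the data, running the received bits through
    the same function descrambles them."""
    state = seed                                    # 15-bit shift register
    out = []
    for b in bits:
        prbs = ((state >> 14) ^ (state >> 13)) & 1  # taps at x^15 and x^14
        state = ((state << 1) | prbs) & 0x7FFF
        out.append(b ^ prbs)
    return out

tx = scramble([0] * 16)   # an all-zeros payload...
print(tx)                 # ...now contains transitions
print(scramble(tx))       # -> back to all zeros at the receiver
```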
Redundancy
Redundancy is the technique that, along with synchronism, is applied before transmission to protect data against errors. It consists of adding information that already exists in the data packet, that is, repeating it. The purpose of this redundancy is to average out the channel noise: if we send the data without redundancy, we cannot know, for each element, whether the noise at reception will be low enough for the receiver to identify it properly. Since the noise is Gaussian with zero mean, its most probable values are close to zero, so it is unlikely to be high enough to mislead the receiver when deciding which data was sent.
It must be said, however, that for the previous premises to hold, it is necessary to operate above a minimum SNR (signal-to-noise ratio). This means the power of the transmitted signal has to be significantly higher than that of the channel noise; otherwise the noise would impair data transmission not only when it took high values (unlikely) but also when it took medium values (very likely), making redundancy a useless technique. Assuming this condition, if we send each data unit more than once, the prevailing values will almost unambiguously be those that were actually sent. Often, when the noise takes a high value, it corrupts bursts of consecutive bits until it settles, and specific techniques are applied to deal with this. To carry out the error detection and correction process through redundancy, blocks called encoders are used in the transmitter and blocks called decoders in the receiver.
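To make the idea concrete, here is a minimal repetition-code sketch (Python; the repetition factor, channel model, and function names are illustrative assumptions): each bit is transmitted n times and the receiver takes a majority vote, so isolated noise-induced flips are outvoted by the correct copies.

```python
import random

def repetition_encode(bits, n=3):
    """Repeat each data bit n times (a rate-1/n repetition code)."""
    return [b for b in bits for _ in range(n)]

def repetition_decode(symbols, n=3):
    """Majority vote over each group of n received symbols."""
    return [int(sum(symbols[i:i + n]) > n // 2)
            for i in range(0, len(symbols), n)]

def bsc(symbols, p):
    """Binary symmetric channel: flip each symbol with probability p."""
    return [s ^ (random.random() < p) for s in symbols]

data = [1, 0, 1, 1, 0]
received = bsc(repetition_encode(data), p=0.1)
print(repetition_decode(received))   # usually recovers [1, 0, 1, 1, 0]
```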
Operation
The possibility of correcting errors is achieved by adding redundancy bits to the original message. The digital source sends the data stream to the encoder, which is responsible for adding these redundancy bits. At the output of the encoder we obtain the so-called codeword. This codeword is sent to the receiver, which, through the appropriate decoder and by applying error correction algorithms, recovers the original data sequence. The two main types of encoding used are:
- Block codes. The encoder introduces parity through an algebraic algorithm applied to a block of bits. The decoder applies the inverse algorithm to identify and then correct the errors introduced during transmission (see the Hamming sketch after this list).
- Convolutional codes. Bits are encoded as they arrive at the encoder, and the coding of each bit is strongly influenced by that of its predecessors. Decoding this type of code is complex, since in principle a large amount of memory is needed to estimate the most likely data sequence for the received bits. The Viterbi algorithm is currently the usual way to decode such codes, owing to its efficient use of resources (see the encoder sketch below).
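As a concrete block-code example, here is a minimal sketch of Hamming(7,4) encoding and syndrome decoding (Python; function names are illustrative). Three parity bits protect four data bits and allow any single-bit error to be located and corrected:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming(7,4) codeword;
    bits are laid out in the classic order p1 p2 d1 p3 d2 d3 d4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(r):
    """Recompute the parities; the syndrome value is the 1-based
    position of a single-bit error (0 means no error detected)."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        r = r[:]
        r[syndrome - 1] ^= 1            # flip the erroneous bit
    return [r[2], r[4], r[5], r[6]]     # extract the data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                            # inject a single-bit error
print(hamming74_decode(word))           # -> [1, 0, 1, 1]
```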
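And a minimal convolutional-encoder sketch, assuming the classic rate-1/2, constraint-length-3 code with generator polynomials 7 and 5 in octal (a textbook example, not necessarily the code of any particular standard). Each output pair depends on the two previous input bits as well as the current one, which is why a sequence estimator such as the Viterbi algorithm is needed at the receiver:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3.
    Two output bits per input bit, each computed over the current
    bit and the two held in the shift register."""
    s1 = s2 = 0                    # shift-register state
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)    # generator 111 (octal 7)
        out.append(b ^ s2)         # generator 101 (octal 5)
        s1, s2 = b, s1
    return out

print(conv_encode([1, 0, 1, 1]))   # -> [1, 1, 1, 0, 0, 0, 0, 1]
```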
Advantages
FEC reduces the number of retransmissions and the power requirements of communication systems, and it increases their effectiveness by avoiding the need to resend messages damaged during reception.
Averaging noise to reduce errors
One could say that FEC works by "averaging the noise": since each data bit affects many transmitted symbols, the corruption of some symbols by noise generally still allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.
- Because of this "risk-pooling" effect, digital communication systems that use FEC tend to work well above a certain minimum signal-to-noise ratio and not at all below it.
- This all-or-nothing tendency, called the cliff effect, becomes more pronounced as stronger codes are used that come closer to Shannon's theoretical limit.
- Interleaving FEC-coded data can reduce the all-or-nothing properties of transmitted FEC codes when channel errors tend to occur in bursts. However, this method has limits; it is best used on narrowband data. A sketch of a block interleaver follows this list.
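A minimal block-interleaver sketch (Python; the row/column geometry and function names are illustrative). Symbols are written row by row and read column by column, so a burst of consecutive channel errors is spread across many codewords after de-interleaving, leaving each codeword with only a few errors that the FEC code can correct:

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write row by row, read column by column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse permutation: swap the roles of rows and columns."""
    return interleave(symbols, cols, rows)

data = list(range(12))
tx = interleave(data, 3, 4)
print(tx)                       # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(deinterleave(tx, 3, 4))   # [0, 1, 2, ..., 11]
```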
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate gets any worse. However, some systems adapt to the given channel error conditions: some hybrid automatic repeat request (ARQ) schemes use a fixed FEC method as long as the FEC can handle the error rate, then switch to ARQ when the error rate gets too high; adaptive modulation and coding uses a variety of FEC rates, adding more error-correction bits per packet when the channel has higher error rates, or removing them when they are not needed.
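A toy sketch of such adaptive behavior (Python; the thresholds, error-rate limit, and function names below are hypothetical, though code rates such as 1/2, 2/3, and 3/4 are typical of standards like DVB):

```python
def select_scheme(snr_db, measured_ber, fec_limit=1e-3):
    """Hypothetical adaptation policy combining the two ideas above:
    pick a code rate from the channel quality (lower SNR -> more
    redundancy), and fall back to ARQ retransmissions when the
    residual error rate exceeds what the FEC can correct."""
    if snr_db < 5:
        rate = "1/2"      # poor channel: half the transmitted bits are parity
    elif snr_db < 10:
        rate = "2/3"
    else:
        rate = "3/4"      # good channel: mostly payload, little parity
    mode = "FEC only" if measured_ber <= fec_limit else "FEC + ARQ"
    return rate, mode

print(select_scheme(4.0, 1e-4))    # -> ('1/2', 'FEC only')
print(select_scheme(12.0, 5e-3))   # -> ('3/4', 'FEC + ARQ')
```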
Trade-offs
In general, including a greater number of redundancy bits implies a greater ability to correct errors. However, it also significantly reduces the useful transmission bit rate and increases the delay in receiving the message. For example, the Hamming(7,4) code sketched above sends 7 bits for every 4 data bits, so the useful throughput drops to 4/7 of the channel rate.