Title: Multi-Processor System-on-Chip 2
Author: Liliana Andrade
Publisher: John Wiley & Sons Limited
Genre: Foreign computer literature
ISBN: 9781119818403
2.3. Towards 1 Tbit/s throughput decoders
Large parallelism is a must for high-throughput decoders approaching 1 Tbit/s. The achievable parallelism strongly depends on the properties of the decoding algorithm. For example, sub-functions of a decoding algorithm that have no mutual data dependencies can easily be parallelized by spatial parallelism. This is the case for the Belief Propagation (BP) algorithm used to decode LDPC codes: all check nodes can be processed independently of each other, and the same applies to the variable nodes. The situation is different for the MAP algorithm used in Turbo code decoding, where the calculation of a specific trellis step depends on the result of the previous trellis step. This results in sequential behavior, and the different trellis steps cannot be calculated in parallel. Such data dependencies also exist between the successive iterations of iterative decoding algorithms. Hence, the bottleneck for high throughput is the part of an algorithm that cannot be parallelized: this part strongly limits the overall achievable speedup, a limitation known as Amdahl's law (Amdahl 2013).

Other important features for efficient high-throughput implementations are locality and regularity, which minimize interconnect. Interconnect can contribute substantially to area, delay and energy consumption. Let us assume an FEC IP block of size 10 mm² is a feasible size. A signal has to travel at least 7 mm if it has to be transmitted from one corner of this IP block to the other. In a 14 nm technology, assuming a clock frequency of 1 GHz, this takes three clock cycles. Such interconnect delay largely decreases the throughput and increases power. Thus, data transfers can be as important as calculations.
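Amdahl's bound can be made concrete with a short calculation. The sketch below is ours, not the chapter's; the function name is hypothetical:

```python
def amdahl_speedup(serial_fraction: float, parallelism: int) -> float:
    """Maximum overall speedup under Amdahl's law when a fraction of
    the algorithm stays strictly serial: S = 1 / (s + (1 - s) / P)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / parallelism)

# Even with 1,000-fold hardware parallelism, a mere 1% serial part
# (e.g. an unbreakable recursion) caps the speedup far below 1,000.
print(round(amdahl_speedup(0.01, 1000), 1))  # 91.0
```

This is why the non-parallelizable part of a decoding algorithm, not the available silicon, becomes the throughput bottleneck.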
Although channel decoding algorithms for advanced codes such as Turbo and LDPC codes are mainly data-flow dominated, they imply irregularity and restricted locality, since efficient channel coding for these schemes is grounded in some randomness: the interleaver for Turbo codes and the Tanner graph for LDPC codes, respectively. Any regularity and locality in these structures improves the implementation efficiency but has a negative impact on the communications performance. Hence, there is a fundamental discrepancy between information theory and efficient high-throughput implementation, which demands regularity and locality. In Table 2.1, important implementation properties of the different code classes are summarized.
Table 2.1. Implementation properties of various coding schemes
| Code  | Dec. Algorithms              | Parallel vs. Serial | Locality           | Compute Kernels     | Transfers vs. Compute |
|-------|------------------------------|---------------------|--------------------|---------------------|-----------------------|
| Turbo | MAP                          | Serial/iterative    | Low (interleaver)  | Add-compare-select  | Compute dominated     |
| LDPC  | Belief propagation           | Parallel/iterative  | Low (Tanner graph) | Min-sum/add         | Transfer dominated    |
| Polar | Successive cancellation/list | Serial              | High               | Min-sum/add/sorting | Balanced              |
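The compute kernels listed in the table are small. For instance, a min-sum check-node update in BP decoding reduces to sign and minimum operations over the incoming messages. A minimal sketch (the function name is ours):

```python
def min_sum_check_node(msgs: list[float]) -> list[float]:
    """Min-sum check-node update: the message sent back on each edge
    combines the signs and the minimum magnitude of all OTHER incoming
    messages (hardware keeps only min1/min2 and the overall sign)."""
    sign_prod = 1
    for m in msgs:
        sign_prod *= -1 if m < 0 else 1
    out = []
    for i, m in enumerate(msgs):
        min_other = min(abs(x) for j, x in enumerate(msgs) if j != i)
        own_sign = -1 if m < 0 else 1   # divide own sign out of the product
        out.append(sign_prod * own_sign * min_other)
    return out
```

Because each check node runs this kernel independently, all of them can be instantiated in parallel; the hard part of an LDPC decoder is routing the messages through the irregular Tanner graph, which is why the table marks it transfer dominated.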
Functional parallelism (pipelining) is an efficient technique to speed up algorithms with data dependencies. In iterative decoding algorithms, we can "unroll" the iterations and insert buffers between the resulting pipeline stages. In this way, the various iterations are calculated in parallel, but on different data sets. Spatial and functional parallelization are implementation techniques only, i.e. they do not change the algorithmic behavior. We can also modify the decoding algorithm itself to enable parallelism. Consider again the MAP algorithm. The data dependencies in the forward/backward recursions can be broken up by exploiting the fact that the trellis has finite memory due to the underlying constraint length of the code. This property enables splitting the trellis into sub-trellises that can be processed independently of each other. However, some acquisition steps are mandatory at the borders of the sub-trellises to obtain the correct probabilities. The length of the sub-trellises and the corresponding acquisition length impact the communications performance: here, increased parallelism can degrade the communications performance.
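The sub-trellis idea can be illustrated with a toy forward recursion. The sketch below is ours, not the chapter's: `step` stands in for one trellis step, and each window is warmed up from an uninformative state over `acq` preceding observations, so that every window could run on its own hardware unit.

```python
from typing import Callable, List, Sequence

def windowed_forward(obs: Sequence, step: Callable, init, uniform,
                     window: int, acq: int) -> List:
    """Window-based parallelization of a forward recursion (sketch).
    `acq` acquisition steps before each window border approximate the
    state metrics a fully serial recursion would deliver there; the
    windows themselves are then mutually independent."""
    states = []
    for start in range(0, len(obs), window):
        # Acquisition: warm up from an uninformative starting state.
        s = init if start == 0 else uniform
        for k in range(max(0, start - acq), start):
            s = step(s, obs[k])
        # Process the window itself (independent of other windows).
        for k in range(start, min(start + window, len(obs))):
            s = step(s, obs[k])
            states.append(s)
    return states
```

With a sufficiently long acquisition length the windowed result is nearly indistinguishable from the serial recursion, because the trellis "forgets" its starting state; shortening `acq` buys more parallelism at the cost of communications performance, exactly the trade-off described above.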
The throughput of channel decoders depends on various communication and implementation parameters. Let N be the block size and R the rate of a channel code. Let I denote the number of iterations that a corresponding (iterative) decoder requires to decode a code block (I = 1 in the case of a non-iterative decoding algorithm). Let P denote the degree of achievable parallelism, defined as the ratio between the number of operations performed in one clock cycle and the total number of operations necessary to perform one decoding iteration for a complete code block. Note that "operation" here represents the critical operation in an algorithm: an operation can be a computation as well as a data transfer. Let f be the clock frequency. Using these parameters, the throughput (in information bits per second) of an FEC architecture can be roughly estimated by

T ≈ (N · R · P · f · (1 − ω)) / I    [2.1]
where ω is a normalized value between 0 and 1 that represents the timing overhead due to, for example, data distribution, interconnect, memory access conflicts, etc. The maximum clock frequency f is determined by the critical path in the compute kernels of the corresponding decoding algorithm and/or by interconnect delay. Moreover, f is typically limited to about 1 GHz by power and design methodology constraints. The overhead ω increases with increasing N and P and is larger for decoding algorithms that have limited locality and are data-transfer dominated. The impact of ω on the throughput can be seen as an effective reduction of the clock frequency f, or as a decrease in P if additional clock cycles that cannot be hidden are mandatory, for example, due to memory conflicts. If we target 1 Tbit/s throughput with a frequency limit of 1 GHz, 1,000 bits have to be decoded in every single clock cycle, which requires extreme parallelism. To achieve the highest throughput, P has to be maximized and ω minimized. In the following sections, we discuss throughput maximization for the different coding schemes.
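The estimate can be turned around to ask how much parallelism a target throughput demands. A small sketch, assuming the throughput model T = N·R·P·f·(1 − ω)/I with the parameters defined above (the function names are ours):

```python
def throughput_bps(N: int, R: float, P: float, f_hz: float,
                   iters: int, omega: float) -> float:
    """Rough decoder throughput estimate T = N*R*P*f*(1 - omega)/I,
    where P is the fraction of one iteration's operations per cycle
    and omega the normalized timing overhead."""
    return N * R * P * f_hz * (1.0 - omega) / iters

def required_parallelism(target_bps: float, N: int, R: float,
                         f_hz: float, iters: int, omega: float) -> float:
    """Invert the estimate: the parallelism P a target throughput needs."""
    return target_bps * iters / (N * R * f_hz * (1.0 - omega))

# A rate-1/2 code with N = 10,000 and 10 iterations at 1 GHz needs
# P = 2 even with zero overhead: two full iterations' worth of
# operations per clock cycle, i.e. unrolled, pipelined iterations.
print(required_parallelism(1e12, 10_000, 0.5, 1e9, 10, 0.0))  # 2.0
```

Any P above 1 means a single time-shared decoder core cannot suffice; the iterations themselves must be spatially replicated or pipelined.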
2.3.1. Turbo decoder
Among the three code classes, Turbo codes are the most challenging with respect to high-throughput decoding. The decoding structure consists of two component decoders connected through an interleaver/deinterleaver. These component decoders work cooperatively by exchanging extrinsic information in an iterative loop. In the MAP-based component decoders, the respective bit-wise log-likelihood ratios are computed in a forward and a backward recursion on the trellis. Thus, decoding is inherently serial.