Title: Digital Communications 1
Author: Safwan El Assad
Publisher: John Wiley & Sons Limited
ISBN: 9781119779773
If we choose the logarithm in base 2, λ becomes equal to unity, and therefore h(xi) = h(xj) = log2(2) = 1 Shannon (Sh), or 1 bit of information, not to be confused with the digital bit (binary digit), which represents one of the binary digits 0 or 1.
Finally, we can then write:
h(xi) = log2(1/pi) = −log2(pi) Sh [2.14]
It is sometimes convenient to work with logarithms in base e or with logarithms in base 10. In these cases, the units will be:
loge(e) = 1 natural unit = 1 nat (one choice among e equiprobable events)
log10(10) = 1 decimal unit = 1 dit (one choice among 10 equiprobable events)
Knowing that:
loga(x) = logb(x)/logb(a)
the relationships between the three units are:
– natural unit: 1 nat = log2(e) = 1/loge(2) = 1.44 bits of information;
– decimal unit: 1 dit = log2(10) = 1/log10(2) = 3.32 bits of information.
They are pseudo-units without dimension.
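These unit conversions are easy to verify numerically; the following Python sketch (ours, not from the book) checks both values:

```python
import math

# 1 nat expressed in Sh (bits of information): log2(e) = 1/loge(2)
NAT_IN_SH = math.log2(math.e)

# 1 dit expressed in Sh: log2(10) = 1/log10(2)
DIT_IN_SH = math.log2(10)

print(f"1 nat = {NAT_IN_SH:.2f} Sh")  # ~1.44
print(f"1 dit = {DIT_IN_SH:.2f} Sh")  # ~3.32
```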
2.3.1. Entropy of a source
Let a stationary memoryless source S produce independent random events (symbols) si, belonging to a predetermined set [S] = [s1, s2, ..., sN]. Each event (symbol) si has a given probability pi, with:
p1 + p2 + ... + pN = 1
The source S is then characterized by the set of probabilities [P] = [p1, p2, ..., pN]. We are now interested in the average amount of information delivered by this source, that is, the information resulting from the whole set of events (symbols) it can produce, each taken into account with its probability of occurrence. This average amount of information is called the “entropy H(S) of the source”.
It is therefore defined by:
H(S) = −Σi=1..N pi log2(pi) Sh per symbol [2.15]
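As an illustration (ours, not the book's code), the entropy definition can be computed directly; the function name `entropy` is an assumption:

```python
import math

def entropy(probs):
    """Entropy H(S) = -sum(p_i * log2(p_i)) in Sh per symbol.

    Uses the convention 0 * log2(0) = 0 by skipping zero probabilities.
    """
    assert abs(sum(probs) - 1.0) < 1e-9, "probabilities must sum to 1"
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform 4-symbol source reaches the maximum log2(4) = 2 Sh/symbol
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```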
2.3.2. Fundamental lemma
Let two probability partitions on S:
[P] = [p1, p2, ..., pN] and [Q] = [q1, q2, ..., qN], with Σi pi = Σi qi = 1
we have the inequality:
Σi pi log2(1/pi) ≤ Σi pi log2(1/qi) [2.16]
Indeed, since loge(x) ≤ x − 1 for any positive real x, taking x = qi/pi gives:
Σi pi loge(qi/pi) ≤ Σi pi (qi/pi − 1) = Σi qi − Σi pi = 0
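A quick numerical check of the lemma (an illustrative sketch, not the book's code): for any two distributions [P] and [Q], the quantity −Σ pi log2(pi) never exceeds −Σ pi log2(qi):

```python
import math

def cross_sum(p, q):
    """-sum(p_i * log2(q_i)); with q = p this is the entropy of p."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]

# The entropy of p is bounded by the cross term, as [2.16] states
print(cross_sum(p, p) <= cross_sum(p, q) + 1e-12)  # True
```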
2.3.3. Properties of entropy
– Positive: since 0 ≤ pi ≤ 1 (with the convention 0 · log2(0) = 0).
– Continuous: because it is a sum of continuous functions “log” of each pi.
– Symmetric: relative to all the variables pi.
– Upper bounded: entropy has a maximum value, Hmax(S) = log2(N), obtained for a uniform law pi = 1/N, i = 1, ..., N.
– Additive: let S be the product of two independent sources S1 and S2, then:
H(S1, S2) = H(S1) + H(S2) [2.17]
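The additivity property can be checked numerically; in this illustrative sketch (ours, not from the book), the joint distribution of two independent sources is built as the product of their marginals:

```python
import math

def entropy(probs):
    # H = -sum(p * log2(p)), with the convention 0 * log2(0) = 0
    return -sum(p * math.log2(p) for p in probs if p > 0)

p1 = [0.5, 0.5]          # source S1
p2 = [0.25, 0.25, 0.5]   # source S2

# Joint distribution of the product source, assuming independence
joint = [a * b for a in p1 for b in p2]

print(abs(entropy(joint) - (entropy(p1) + entropy(p2))) < 1e-12)  # True
```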
2.3.4. Examples of entropy
2.3.4.1. Two-event entropy (Bernoulli’s law)
For a source with two events of probabilities p and 1 − p:
H(p) = −p log2(p) − (1 − p) log2(1 − p)
Figure 2.1. Entropy of a two-event source
The maximum of the entropy, H = 1 Sh, is obtained for p = 1/2.
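A short sketch (ours, not the book's) tabulating the binary entropy reproduces the symmetric curve of Figure 2.1, with its maximum at p = 1/2:

```python
import math

def h2(p):
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p), in Sh."""
    if p in (0.0, 1.0):
        return 0.0  # convention 0 * log2(0) = 0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# The curve rises from 0, peaks at p = 0.5 with H = 1 Sh, then falls back
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p}: H = {h2(p):.3f} Sh")
```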
2.3.4.2. Entropy of an alphabetic source with (26 + 1) characters
– For a uniform law: ⟹ H = log2(27) = 4.75 bits of information per character
– In the French language (according to a statistical study): ⟹ H = 3.98 bits of information per character
Thus, a text of 100 characters provides, on average, 100 × 3.98 = 398 bits of information.
The inequality of the probabilities causes a loss of 475 − 398 = 77 bits of information.
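These figures can be reproduced numerically; a small Python sketch (the entropy value for French is taken from the text above):

```python
import math

N = 27                    # 26 letters + the space character
H_uniform = math.log2(N)  # maximum entropy, uniform law
H_french = 3.98           # measured entropy of French (from the text)

print(f"H_max = {H_uniform:.2f} Sh/char")         # ~4.75
print(f"100-char text: {100 * H_french:.0f} Sh")  # 398
print(f"loss: {100 * (H_uniform - H_french):.0f} Sh")  # ~77
```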
2.4. Information rate and redundancy of a source
The information rate of a source is defined by:
Dt(S) = H(S)/τ Sh/s [2.18]
Where: τ is the average duration of a symbol of the source, in seconds.
The redundancy of a source is defined as follows:
Rs = 1 − H(S)/Hmax(S) [2.19]
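Assuming [2.19] denotes the relative redundancy 1 − H/Hmax, the French-text example above gives a redundancy of about 16%; the symbol duration used below is an illustrative value, not from the book:

```python
import math

H = 3.98               # Sh per character, French text (from the text)
H_max = math.log2(27)  # Sh per character, uniform 27-symbol law

# Relative redundancy: the fraction of the maximum entropy not used
redundancy = 1 - H / H_max
print(f"redundancy of French ~ {redundancy:.1%}")

# Information rate for an assumed average symbol duration of 10 ms
tau = 0.01             # seconds per character (illustrative value)
rate = H / tau         # Sh per second
print(f"rate = {rate:.0f} Sh/s")
```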
2.5. Discrete channels and entropies
Between the source of information and the destination, there is the medium through which information is transmitted. This medium, including the equipment necessary for transmission, is called the transmission channel (or simply the channel).
Let us consider a discrete, stationary and memoryless channel (discrete: the symbol alphabets at the input and at the output are both discrete).
Figure 2.2. Basic transmission system based on a discrete channel. For a color version of this figure, see www.iste.co.uk/assad/digital1.zip