Differential (Predictive) Coding
Vicente González Ruiz
January 1, 2020
1 Motivation
- It usually happens that the differences $e[n] = s[n] - s[n-1]$ between consecutive PCM
(Pulse-Code Modulation) [2, 3] samples $s[n]$
tend to have a smaller variance (and therefore, a smaller entropy) than the
original samples,
which potentially provides better lossless compression ratios for
$e$ than for $s$.
- Finally, notice that, by definition, if a source of “noise” (such as the
quantization noise) has not been introduced during the encoding process,
differential encoding is a fully reversible process:

$$s[n] = e[n] + s[n-1].$$
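As a minimal illustration of this reversibility (a sketch, assuming integer PCM samples stored in a NumPy array; the toy signal below is hypothetical), the encoder keeps the first sample and the differences, and the decoder recovers the original samples exactly with a cumulative sum:

```python
import numpy as np

# A toy PCM signal (any integer sequence works).
s = np.array([1000, 1002, 1005, 1003, 999], dtype=np.int32)

# Differential encoding: e[0] = s[0], e[n] = s[n] - s[n-1] for n > 0.
e = np.empty_like(s)
e[0] = s[0]
e[1:] = s[1:] - s[:-1]

# Differential decoding: s[n] = e[n] + s[n-1], i.e., a cumulative sum.
s_reconstructed = np.cumsum(e)

assert np.array_equal(s, s_reconstructed)  # fully reversible: no noise was added
```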
2 Lab
- Compute the differences
signal $e$ for the Jfk_berlin_address_high.ogg
signal and plot the probability density function (which can be estimated with a histogram)
of $e$.
Note: use Python and Matplotlib, and insert the code here.
IPython notebook
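A possible sketch of this exercise, assuming that the soundfile package (which decodes Ogg Vorbis through libsndfile) and Matplotlib are installed and that the file is in the working directory:

```python
import numpy as np
import soundfile as sf            # reads Ogg Vorbis through libsndfile
import matplotlib.pyplot as plt

# Load the audio (float samples in [-1, 1]); mix stereo down to mono.
s, sample_rate = sf.read("Jfk_berlin_address_high.ogg")
if s.ndim > 1:
    s = s.mean(axis=1)

# Differences between consecutive samples.
e = np.diff(s)

# Compare the histograms (estimated probability densities) of s and e.
plt.hist(s, bins=256, density=True, alpha=0.5, label="s (original)")
plt.hist(e, bins=256, density=True, alpha=0.5, label="e (differences)")
plt.xlabel("amplitude")
plt.ylabel("estimated probability density")
plt.legend()
plt.show()
```

The histogram of $e$ should be much more concentrated around zero than the histogram of $s$, which is what makes the differences easier to compress.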
3 DPCM (Differential PCM)
4 ADPCM (Adaptive DPCM)
- In order to minimize the variance of the prediction error $e$,
the predictor
can change its behaviour depending on the characteristics of $s$.
5 Forward adaptive prediction
- Splits the input into blocks, optimizes the predictor for each block
and sends with each ADPCM block the predictor coefficients as side
information.
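A minimal sketch of this idea, assuming a first-order predictor $\hat{s}[n] = a\,s[n-1]$ whose single coefficient is recomputed for every block and would be sent (quantized, in a real codec) as side information; the function name and the block size are illustrative choices:

```python
import numpy as np

def forward_adaptive_residue(s, block_size=1024):
    """Split s into blocks, fit a first-order predictor to each block and
    return the residue plus the per-block coefficients (side information)."""
    s = np.asarray(s, dtype=float)
    residue = np.empty_like(s)
    coefficients = []                      # one coefficient per block
    for start in range(0, len(s), block_size):
        block = s[start:start + block_size]
        prev = np.concatenate(([0.0], block[:-1]))   # s[n-1] inside the block
        # Least-squares optimal a for the model s[n] ~ a * s[n-1].
        a = np.dot(prev, block) / (np.dot(prev, prev) + 1e-12)
        coefficients.append(a)
        residue[start:start + block_size] = block - a * prev
    return residue, coefficients
```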
6 Optimal prediction based on Wiener-Hopf optimization
- The predictor can be any structure capable of producing a prediction signal $\hat{s}[n]$ from previously seen samples.
- In the case of using LPC (Linear Predictive Coding),

$$\hat{s}[n] = \sum_{i=1}^{M} a_i\, s[n-i], \tag{6}$$

where $a_i$ are the LPC
coefficients and $M$
is the predictor order.
- The optimal coefficients $a_i$
can be found when they minimize the variance of the residue (error
signal)

$$\sigma_e^2 = E\{e^2[n]\} = E\left\{\left(s[n] - \sum_{i=1}^{M} a_i\, s[n-i]\right)^2\right\}. \tag{7}$$
- For this, it must hold that

$$\frac{\partial \sigma_e^2}{\partial a_j} = 0, \quad j = 1, \dots, M. \tag{8}$$
- Computing these derivatives and applying expectations, we get that

$$\sum_{i=1}^{M} a_i\, R_s(i-j) = R_s(j), \quad j = 1, \dots, M, \tag{Wiener-Hopf equations}$$

where $R_s$ is the
autocorrelation function of $s$,
defined as

$$R_s(k) = E\{s[n]\, s[n+k]\}. \tag{autocorrelation}$$
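A numerical sketch of how the Wiener-Hopf equations can be solved with NumPy and SciPy: the autocorrelation $R_s$ is estimated from the available samples and the resulting Toeplitz system is solved for the coefficients $a_i$. The helper names are illustrative, not taken from any particular library.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def autocorrelation(s, max_lag):
    """Biased estimate of R_s(k) = E{s[n] s[n+k]} for k = 0, ..., max_lag."""
    s = np.asarray(s, dtype=float)
    return np.array([np.dot(s[:len(s) - k], s[k:]) / len(s)
                     for k in range(max_lag + 1)])

def wiener_hopf_coefficients(s, M):
    """Solve sum_i a_i R_s(i - j) = R_s(j), j = 1, ..., M, for the a_i."""
    R = autocorrelation(s, M)
    # System matrix: Toeplitz with first column R[0], ..., R[M-1].
    # Right-hand side: R[1], ..., R[M].
    return solve_toeplitz(R[:M], R[1:M + 1])
```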
7 Lab
For each predictor order $M$,
determine the optimal predictor for the audio Jfk_berlin_address_high.ogg,
using Wiener-Hopf optimization. Next, compute the variance of $e$ for
each prediction order.
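A possible sketch of this experiment, reusing the wiener_hopf_coefficients() helper defined above and again assuming that the audio file and the soundfile package are available:

```python
import numpy as np
import soundfile as sf

s, _ = sf.read("Jfk_berlin_address_high.ogg")
if s.ndim > 1:
    s = s.mean(axis=1)

print(f"variance of s = {np.var(s):.3e}")
for M in range(1, 9):
    a = wiener_hopf_coefficients(s, M)       # optimal LPC coefficients
    # Prediction \hat{s}[n] = sum_i a_i s[n-i], built by shifting s.
    prediction = np.zeros_like(s)
    for i, a_i in enumerate(a, start=1):
        prediction[i:] += a_i * s[:-i]
    e = s - prediction
    print(f"M = {M}: variance of e = {np.var(e):.3e}")
```

The variance of $e$ is expected to decrease (or at least not increase) as the prediction order $M$ grows.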
8 Backward adaptive prediction
- Both the encoder and the decoder can continuously adapt the prediction to
the characteristics of $s$,
provided that both only use the information that is available at the decoder.
9 Adaptive prediction based on the Least Mean Squares (LMS) algorithm
- As we know,

$$e[n] = s[n] - \hat{s}[n] = s[n] - \sum_{i=1}^{M} a_i\, s[n-i]. \tag{9}$$
- Our objective is to minimize $\sigma_e^2$,
which is the same as minimizing the energy of the prediction error
$e^2[n]$, by
controlling the $a_i$
coefficients.
- Suppose that we can control these coefficients iteratively, by
incrementing or decrementing them with a proportionality constant $\mu$ (the
larger $\mu$,
the faster the convergence, but possibly also the larger the oscillation, and vice versa). Let’s denote
by $a_i^{(k)}$ the value
of the coefficient $a_i$ at
iteration $k$
of the algorithm. If it happens, for example, that
$\big(e^{(k)}[n]\big)^2 > \big(e^{(k-1)}[n]\big)^2$
(the squared prediction error at iteration $k$
has increased compared to the previous iteration), then the prediction
should have been larger, and vice versa. Let’s define the iterative control of the
coefficients as

$$a_i^{(k+1)} = a_i^{(k)} - \mu\, \frac{\partial e^2[n]}{\partial a_i}. \tag{LMS_idea}$$
- Applying derivatives to Eq. (9), we obtain that

$$\frac{\partial e^2[n]}{\partial a_i} = -2\, e[n]\, s[n-i]. \tag{11}$$
- Substituting this expression into Eq. (LMS_idea), we get that

$$a_i^{(k+1)} = a_i^{(k)} + 2\mu\, e[n]\, s[n-i]. \tag{LMS_control}$$
- Notice that, in a backward adaptive prediction scheme, both elements
$e[n]$ and $s[n-i]$ are known
at iteration $k$
by both the encoder and the decoder.
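A minimal sketch of a backward-adaptive LMS predictor that applies Eq. (LMS_control) sample by sample; the order M and the step size mu are illustrative choices and may need tuning (or normalization of the input) for the adaptation to remain stable:

```python
import numpy as np

def lms_prediction_error(s, M=4, mu=0.01):
    """Backward-adaptive prediction of s using the LMS rule
    a_i <- a_i + 2 mu e[n] s[n-i]; returns the prediction error e."""
    s = np.asarray(s, dtype=float)
    a = np.zeros(M)                    # predictor coefficients, start at zero
    e = np.zeros_like(s)
    for n in range(len(s)):
        # Past samples s[n-1], ..., s[n-M] (zeros before the signal starts).
        past = np.array([s[n - i] if n - i >= 0 else 0.0
                         for i in range(1, M + 1)])
        prediction = np.dot(a, past)           # \hat{s}[n]
        e[n] = s[n] - prediction
        a += 2 * mu * e[n] * past              # Eq. (LMS_control)
    return e

# The decoder can run exactly the same update, because e[n] (received) and the
# already-decoded past samples s[n-i] are all it needs to update the a_i.
```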
10 Lossy DPCM
References