The LOCO-I Standard (ISO/IEC 14495-1)

Vicente González Ruiz

January 1, 2020

Contents

1 Encoder
2 Decoder
3 The RLE mode
References


1 Encoder

  1. Initialization of the prediction contexts:
    1. Let Q be the current context (there are 1094 different spatial contexts).
    2. Let N[Q] be the number of times that the context Q has occurred.
    3. Let B[Q] be the accumulated prediction error for the context Q.
    4. Let A[Q] be the sum of the absolute values of the prediction residues for the context Q.
    5. Let C[Q] be the bias-cancellation value for the context Q. This value is added to the predictions so that the prediction residues have a zero average. If this is not satisfied, the compression ratio is reduced severely, because the mean of the real Laplace probability distribution of the residues and that of the modeled distribution do not match. For this reason, C[Q] is kept proportional to B[Q]/N[Q], which, added to the predictions, cancels the bias.
  2. Determination of the prediction context Q:
    1. Compute the local gradient:

      g1 = d − a,  g2 = a − c,  g3 = c − b,  g4 = b − e,

      where a, b, c, d and e denote causal neighbors of the current sample s.

    2. Quantize the gradients:

      qi = 0 if gi = 0,
           1 if 1 ≤ |gi| ≤ 2,
           2 if 3 ≤ |gi| ≤ 6,
           3 if 7 ≤ |gi| ≤ 14,
           4 otherwise,

      for i = 1, 2, 3 (taking the sign of gi), and

      q4 = 0 if |g4| ≤ 5,
           1 if g4 > 5,
           2 otherwise.
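The gradient quantization above can be sketched in Python as follows. The thresholds 0, 2, 6 and 14 are the ones listed; the threshold 5 for q4 is a reconstruction from these notes, so treat it as an assumption:

```python
def quantize_gradient(g):
    """Map a local gradient to one of the 9 regions -4..4
    (thresholds 0, 2, 6 and 14, signed)."""
    sign = -1 if g < 0 else 1
    g = abs(g)
    if g == 0:
        q = 0
    elif g <= 2:
        q = 1
    elif g <= 6:
        q = 2
    elif g <= 14:
        q = 3
    else:
        q = 4
    return sign * q

def quantize_g4(g4):
    """Quantize the fourth gradient into 3 regions
    (threshold 5: an assumption reconstructed from these notes)."""
    if abs(g4) <= 5:
        return 0
    return 1 if g4 > 5 else 2

def context(a, b, c, d, e):
    """Quantized context tuple from the causal neighbors."""
    return (quantize_gradient(d - a),
            quantize_gradient(a - c),
            quantize_gradient(c - b),
            quantize_g4(b - e))
```

Contexts of opposite sign are merged in LOCO-I (which is how 9 × 9 × 9 × 3 regions collapse to 1094 contexts); that merging is omitted here for brevity.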

  3. Compute the residue M(e):
    1. The initial prediction:

      ŝ = min(a, b) if c ≥ max(a, b),
          max(a, b) if c ≤ min(a, b),
          a + b − c otherwise.

    2. Bias cancellation:

      ŝ ← ŝ + C[Q] if g1 > 0, ŝ ← ŝ − C[Q] otherwise.

    3. Compute the prediction error:

      e ← (s − ŝ) mod α,

      where α = 2^β is the size of the source alphabet and β is the number of bits/pixel. This reduction projects the residues from the dynamic range [−α + 1, α − 1] to [−α/2, α/2 − 1].

    4. Shuffle the residues in order to get an exponential (with negative exponent) distribution of the probability of the residues:

      M(e) = 2e if e ≥ 0, 2|e| − 1 otherwise.

      After that, the residues are in the range [0, 1, …, 2^β − 1].
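Step 3 can be sketched as follows, assuming β = 8 bits/pixel; the wrap of the modulo-reduced error back into the signed range [−α/2, α/2 − 1] is my reading of the notes:

```python
def predict(a, b, c):
    """Median edge detector (MED): a = left, b = above, c = above-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def reduce_residue(s, s_hat, beta=8):
    """Reduce the prediction error modulo alpha = 2**beta and
    re-center it into [-alpha/2, alpha/2 - 1]."""
    alpha = 1 << beta
    e = (s - s_hat) % alpha
    if e >= alpha // 2:
        e -= alpha
    return e

def map_residue(e):
    """Fold the two-sided residue distribution into 0, 1, 2, ...
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    return 2 * e if e >= 0 else 2 * abs(e) - 1
```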

  4. Variable-length encoding of M(e) in the context Q:
    1. Output a Rice code for M(e) using the parameter k = ⌈log2(A[Q]/N[Q])⌉.
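A minimal sketch of the Rice coding step. The parameter rule below (smallest k with N[Q]·2^k ≥ A[Q], i.e. roughly ⌈log2(A[Q]/N[Q])⌉) is the usual LOCO-I/JPEG-LS choice, stated here as an assumption; the bitstring output is for illustration only:

```python
def rice_parameter(A, N):
    """Smallest k such that N * 2**k >= A."""
    k = 0
    while (N << k) < A:
        k += 1
    return k

def rice_encode(m, k):
    """Rice code of the mapped residue m: unary quotient,
    then the k-bit binary remainder."""
    q, r = m >> k, m & ((1 << k) - 1)
    code = '1' * q + '0'          # unary part, '0'-terminated
    if k:
        code += f'{r:0{k}b}'      # fixed-length remainder
    return code
```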
  5. Update the context Q:
    1. B[Q] ← B[Q] + e.
    2. A[Q] ← A[Q] + |e|.
    3. If N[Q] = RESET (where 64 ≤ RESET ≤ 256), then:
      1. A[Q] ← A[Q]/2.
      2. B[Q] ← B[Q]/2.
      3. N[Q] ← N[Q]/2.
    4. N[Q] ← N[Q] + 1.
    5. Update of C[Q]:
      1. If B[Q] ≤ −N[Q], then:
        1. B[Q] ← B[Q] + N[Q].
        2. If C[Q] > −128, then:
          • C[Q] ← C[Q] − 1.
        3. If B[Q] ≤ −N[Q], then:
          • B[Q] ← −N[Q] + 1.
      2. Else:
        1. If B[Q] > 0, then:
          • B[Q] ← B[Q] − N[Q].
          • If C[Q] < 127, then:
            • C[Q] ← C[Q] + 1.
          • If B[Q] > 0, then:
            • B[Q] ← 0.
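Step 5 can be sketched as follows, assuming the standard JPEG-LS bias computation with C[Q] clamped to [−128, 127] (the halving at RESET uses plain integer division here; the standard specifies the rounding in more detail):

```python
RESET = 64  # any value in [64, 256]

def update_context(stats, e):
    """Update the (A, B, C, N) statistics of one context after
    coding the residue e; returns the new tuple."""
    A, B, C, N = stats
    B += e
    A += abs(e)
    if N == RESET:            # periodically halve, favouring recent samples
        A //= 2
        B //= 2
        N //= 2
    N += 1
    # Bias computation: keep B in (-N, 0] by adjusting C
    if B <= -N:
        B += N
        if C > -128:
            C -= 1
        if B <= -N:
            B = -N + 1
    elif B > 0:
        B -= N
        if C < 127:
            C += 1
        if B > 0:
            B = 0
    return (A, B, C, N)
```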

2 Decoder

  1. Identical to Step 1 of the encoder.
  2. Identical to Step 2 of the encoder.
  3. Decode M(e).
  4. Compute the initial prediction as in Step 3.a of the encoder.
  5. Add the bias to ŝ, exactly as the encoder does:

    ŝ ← ŝ + C[Q] if g1 > 0, ŝ ← ŝ − C[Q] otherwise.

  6. Recover the original Laplace distribution for the residues:

    e = M⁻¹(M(e)) = −(M(e) + 1)/2 if M(e) is odd, M(e)/2 otherwise.

  7. s ← (e + ŝ) mod α.
  8. Update Q as in Step 5 of the encoder.
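The decoder's inverse mapping and reconstruction can be sketched as (again assuming β = 8):

```python
def map_residue(e):
    """Encoder-side folding (repeated here for a round-trip check)."""
    return 2 * e if e >= 0 else 2 * abs(e) - 1

def unmap_residue(m):
    """Invert M: even codes come from e >= 0, odd ones from e < 0."""
    return -(m + 1) // 2 if m % 2 else m // 2

def reconstruct(e, s_hat, beta=8):
    """Step 7: undo the modulo reduction by reducing the sum
    modulo alpha = 2**beta."""
    return (e + s_hat) % (1 << beta)
```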

3 The RLE mode

LOCO-I uses a Rice encoder, which can be very redundant when the probability distribution of the residues is very narrow: a Rice code spends at least one bit per sample, even in completely flat regions. To overcome this drawback, there is a special run-length encoding mode for this situation, triggered when a = b = c = d.

The normal mode is restarted when s ≠ a or the end of the line has been reached.
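The run detection can be sketched as follows (a simplified illustration; the actual standard encodes the run lengths with an adaptive Golomb-style code):

```python
def run_length(row, x, a):
    """Count how many samples of row, starting at column x, equal
    the left neighbour a; the run ends at the first s != a or at
    the end of the line."""
    n = 0
    while x + n < len(row) and row[x + n] == a:
        n += 1
    return n
```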

Let’s go to the lab!

References

[1]   Marcelo J. Weinberger, Gadiel Seroussi, and Guillermo Sapiro. LOCO-I: A low complexity, context-based, lossless image compression algorithm. In Proceedings of the Data Compression Conference (DCC '96), pages 140–149. IEEE, 1996.