Compression schemes redux
Cees Jan Koomen
1/26/2009 12:01 AM EST
Can you put a one-hour HD
movie in 8 kilobytes of memory? Impossible.
But some people believed exactly that, on the strength
of what appeared to be a revolutionary compression
technology with a compression factor of one million.
In the early 1990s, a Dutch TV engineer named Jan Sloot
invented and patented a new source-coding method. He was so
secretive about the inner workings of his invention that he
alone--nobody else--really knew how it worked. Sloot was
able to maintain this high level of secrecy by carrying
around what appeared to be a memory device--apparently less
than a megabyte--that was never out of his control or sight.
Investors regarded Sloot's technology as the miracle
breakthrough that would fundamentally change the landscape
for media storage and distribution. They scrambled to make
an investment even though Sloot refused to give them an
in-depth look at the technology. The most avid of these
suitors were convinced of the power of the technology,
although others said it was a hoax.
But it did seem to work. Witnesses saw the inventor plug the
hardware device into another unit, after which video
playback was possible--despite the limited memory involved.
In 1999, Sloot died. With him to the grave went his
implementation secrets, although his patent did leave behind
a few clues. An article in the Dutch magazine "De
Ingenieur" (The Engineer) of August 15, 2008 offers a
well-written analysis of how his method might have worked. The
author explains that the method works with a reference table
and a memory of image elements, each consisting of a small
set of pixels. Each entry in the reference table refers to a
specific image element from the memory.
A TV image is then encoded as a set of numbers in the
reference table, each number referring to a basic image
element in that memory. The original image can be
reconstructed by combining the reference table with the
corresponding image elements.
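As a rough illustration of this idea (not Sloot's actual method, which remains unknown), here is a minimal sketch in Python of codebook-based decoding: the "memory" is a library of small pixel blocks, and the "reference table" is simply a list of indices into it. The codebook contents, block size, and function names are made up for the example.

    import numpy as np

    # Hypothetical "memory": 256 basic image elements, each an 8x8 block
    # of grayscale pixels, shared by encoder and decoder.
    rng = np.random.default_rng(0)
    codebook = rng.integers(0, 256, size=(256, 8, 8), dtype=np.uint8)

    def decode(reference_table, blocks_per_row):
        """Rebuild an image from a list of codebook indices."""
        rows = []
        for i in range(0, len(reference_table), blocks_per_row):
            blocks = [codebook[j] for j in reference_table[i:i + blocks_per_row]]
            rows.append(np.hstack(blocks))  # place blocks side by side
        return np.vstack(rows)              # stack the rows vertically

    # Sixteen table entries describe a 32x32-pixel image (4x4 blocks of 8x8),
    # yet the table itself is only sixteen small numbers.
    table = [3, 17, 42, 42, 9, 9, 9, 200, 3, 17, 42, 42, 9, 9, 9, 200]
    image = decode(table, blocks_per_row=4)
    print(image.shape)  # (32, 32)

The catch, of course, is that the information has not vanished: it sits in the codebook. Counting only the table and ignoring the codebook is exactly the confusion discussed next.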
The reference table itself can be kept very small, on the
order of kilobytes. It has been speculated that some people
mistook the size of that table for a measure of the
compression ratio; divide a raw movie of tens of gigabytes
by a table of tens of kilobytes and you do indeed end up
with a compression factor of a million. The article argues
that in reality the compression method is comparable to an
approach such as MPEG2, although there are differences in
the way the compression works.
But how much compression is really possible and how much
memory is really necessary?
To answer that question, we turn to information theory,
which poses questions such as "under what conditions can
error-free communication take place?" and "what happens in
less than ideal conditions, such as noise on a communication
channel?" A well-known information-theory result is the
Nyquist-Shannon sampling theorem, which says that if you
have an analog signal (for example, a TV signal) with a
highest frequency of "f," then you have to sample that
signal at at least twice this highest frequency (i.e. "2f")
to guarantee faithful reproduction of the original signal.
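In symbols, with f_s the sampling rate and f the highest frequency present in the signal:

    f_s \ge 2f

For instance, taking the nominal PAL luminance bandwidth of about 5 MHz (a standard figure, used here only as an illustration), faithful reproduction requires sampling at 10 MHz or more.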
Take, for example, a PAL video signal with a resolution of
576 x 720 pixels, or a total of 414,720 pixels per image. An
image is refreshed 25 times a second, giving a rate of over
ten million pixels per second. With how many bits do we need
to encode a pixel? The eye can distinguish several million
levels of light and can discern approximately 120 levels of
color. Encoding these levels translates into 23 and 7 bits
respectively, a total of 30 bits per pixel. (I have ignored
the fact that the color sensitivity of the eye depends on
the quantity of light.)
In our PAL example, the bottom line is some 300 million bits
per second. In practice, however, we use much less.
Suppose we limit light and color encoding to 15 bits,
scaling back to about 155 million bits per second. Even
then, a one-hour movie requires a whopping 70 gigabytes of
memory. Obviously, we still want a much smaller footprint.
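A quick back-of-the-envelope check of these figures, using the assumptions above (this is just the article's arithmetic, restated):

    # Raw PAL bitrate under the assumptions in the text.
    pixels_per_frame = 576 * 720        # 414,720 pixels
    frames_per_second = 25              # PAL refresh rate
    bits_per_pixel = 15                 # reduced light/color encoding

    bits_per_second = pixels_per_frame * frames_per_second * bits_per_pixel
    print(f"{bits_per_second / 1e6:.0f} Mbit/s")      # ~156 Mbit/s

    bytes_per_hour = bits_per_second * 3600 / 8
    print(f"{bytes_per_hour / 1e9:.0f} GB per hour")  # ~70 GB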
Any image contains redundancies. Therefore, we can manage
with less information by encoding these redundancies in a
more compact form.
So what would a compression factor of a million mean? For
comparison, at a compression factor of 50 (H.264), the
information content would be equivalent to some 41,000 HD
pixels. Applying a compression ratio of a million to a
1920 x 1080 pixel HD image yields a resolution corresponding
to only one or two HD pixels. In other words: no picture at
all, just a dot or two.
At that compression ratio a movie makes no sense anymore.
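The same arithmetic, spelled out (the factor of 50 for H.264 is the article's own rough figure):

    # Equivalent pixel budget of one HD frame after compression.
    hd_pixels = 1920 * 1080             # 2,073,600 pixels per frame
    for factor in (50, 1_000_000):
        print(f"factor {factor:>9,}: ~{hd_pixels / factor:,.0f} equivalent pixels")
    # factor        50: ~41,472 equivalent pixels
    # factor 1,000,000: ~2 equivalent pixels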
Unless you believe it still should work. But maybe that kind
of impressive passion about compression is just as
entertaining as a movie; that is, if you can still see it.
Cees Jan Koomen is a former Philips executive based in the
Netherlands.
Reader comment, 2/3/2009 2:33 PM:
I think the author neglects the massive
potential for compression across time due to
slowly-changing images, but regardless,
compression by a factor of a million is
possible in theory. Imagine the complexity
of the human body, which originates from a
mere 12 billion bits in the DNA, the bulk of
which are unimportant. A sufficient
decompression engine could extrapolate the
entire structure of any adult from the DNA alone.
Or imagine this scene: "Ingrid Bergman leans
against a maple tree trunk, one eye peering
from beneath a wide-brimmed hat, waiting
impatiently for her visitor." The
decompression engine of my mind took those
few bytes of information and rendered them
into an image which might take megabytes to
store on a DVD. There is no reason a CPU
could not do the same, given a sufficient
image library. In this case, almost any
maple tree image suffices without a loss of meaning.