# The PCIF algorithm: compression through polyominoes

In this step, to compress a layer, the algorithm groups the ones of the layer into sets that represent hv-convex polyominoes. These sets of ones are defined by the following properties:

• All the ones are connected (connectivity)
• In every column of the polyomino, the ones form a single contiguous run (v-convexity)
• In every row of the polyomino, the ones form a single contiguous run (h-convexity)
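As an illustration, the three properties above can be checked directly on a set of cells. The following sketch is not the PCIF implementation, just a direct translation of the definitions:

```python
from collections import deque

def is_hv_convex(cells):
    """Check whether a set of (row, col) cells is an hv-convex polyomino:
    connected, with contiguous ones in every row and every column."""
    if not cells:
        return False
    cells = set(cells)
    rows, cols = {}, {}
    for r, c in cells:
        rows.setdefault(r, []).append(c)
        cols.setdefault(c, []).append(r)
    # h-convexity: the ones in every row form one contiguous run
    for cs in rows.values():
        if max(cs) - min(cs) + 1 != len(cs):
            return False
    # v-convexity: the ones in every column form one contiguous run
    for rs in cols.values():
        if max(rs) - min(rs) + 1 != len(rs):
            return False
    # connectivity: flood fill from an arbitrary cell must reach all cells
    start = next(iter(cells))
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return len(seen) == len(cells)
```

An L-shaped tetromino passes the test, while a U-shape fails it because one of its rows contains a gap.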

The figure shows some types of polyominoes; only the rightmost is both horizontally and vertically convex and could be coded by the algorithm. Thanks to the properties above, a convenient coding for these shapes can be defined. Grouping the one values of a layer into polyominoes is effective from a compression point of view, because values that are close in the layer are coded together with a small number of bits. Since we always consider the two-dimensional nature of the data, we group ones that are close to each other in both dimensions; we expect such values to cluster in areas of the image that are not smooth, and that therefore retain high values after the filtering phase.

We will go into more detail further on; in short, the coding through polyominoes is done with the following steps:

1. The image is scanned from left to right, row by row, counting the number of zero values passed.
2. When a one is found, the algorithm determines a polyomino that contains it, and writes to the compressed file the polyomino and the counted number of zeros, which represents the polyomino's relative distance from the previous one.
3. The zero count is reset, and the above steps are repeated until the end of the matrix.

In step one, values that have already been coded are skipped. In addition, some values that are known to be zero (otherwise they would have been included in a polyomino) are also skipped. This gives a slightly better distribution of the distance values.
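The scan loop of steps one to three can be sketched as follows. Here `encode_polyomino` is a hypothetical callback standing in for the polyomino-selection logic, and for simplicity the skipped set only tracks coded cells, not the extra known zeros:

```python
def scan_layer(layer, encode_polyomino):
    """Scan a binary layer row by row, emitting (zero_run, polyomino)
    pairs as described in the three steps above. `encode_polyomino`
    returns the set of (row, col) cells of the chosen polyomino."""
    skipped = set()   # cells already coded (PCIF also skips known zeros)
    zeros = 0
    out = []
    for r in range(len(layer)):
        for c in range(len(layer[0])):
            if (r, c) in skipped:
                continue          # step 1: skip already-coded values
            if layer[r][c] == 0:
                zeros += 1        # step 1: count the zeros passed
            else:
                poly = encode_polyomino(layer, r, c)
                out.append((zeros, poly))  # step 2: emit distance + shape
                skipped |= poly
                zeros = 0                  # step 3: reset the count
    return out
```

For example, with a callback that returns single-cell polyominoes, a layer with ones at (0, 2) and (2, 0) yields the pairs (2, {(0, 2)}) and (3, {(2, 0)}).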

The figure shows the encoding of a simple layer through polyominoes: in blue, the ones that still have to be encoded, and in cyan, the known values that will not be counted in the distances. Notice that some additional points beyond the encoded polyominoes are also considered known values, because if they contained a one they would already have been encoded in a polyomino; this description, however, will not go into the details of this. In step two, when a one is found, the algorithm generates a family of polyominoes that contain it and chooses the one that gives the best coding efficiency, i.e. that codes many ones with few bits. To keep the algorithm fast, approximations are used, and the algorithm chooses the polyomino that, according to given criteria, is closest to optimal.

At this point the algorithm writes a coding for the distance and for the polyomino in the compressed file. Distances are coded with a Huffman tree with 64 values. If the distance exceeds the maximum value, the greatest available value is written and the residual distance is coded with a variable-bit coding. The Huffman trees have to be stored in the file together with the compressed data.
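The escape scheme for long distances can be sketched as below. The source does not specify which variable-bit code is used for the residual, so Elias gamma is shown purely as a stand-in:

```python
def split_distance(d, max_sym=63):
    """Split a zero-run distance into a Huffman symbol and an optional
    residual: with a 64-value tree, symbols 0..62 are literal distances
    and 63 is the escape symbol followed by a residual."""
    if d < max_sym:
        return d, None
    return max_sym, d - max_sym

def gamma_code(n):
    """Elias gamma coding, shifted to cover n >= 0, as a placeholder
    for PCIF's unspecified variable-bit residual code."""
    b = bin(n + 1)[2:]
    return "0" * (len(b) - 1) + b
```

For example, a distance of 100 is emitted as the escape symbol 63 followed by the coded residual 37, while a distance of 10 is a single Huffman symbol.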

Every encountered polyomino is then encoded with the following steps:

1. First of all, two integers representing the polyomino's dimensions are written with Huffman codes. By its dimensions we mean the dimensions of the minimal rectangle, with sides parallel to the x and y axes, that contains it; this rectangle will also be referred to as the bounding box.
2. If the width of the polyomino is greater than its height, the algorithm rotates it by 90 degrees clockwise. This saves some bits of storage because, for how the following encoding is defined, it is convenient for the greater dimension of the polyomino to be its height.
3. The second step of the coding of a polyomino is to write to the file the positions of the first of the lower and upper ones. This will be useful in the following, as these so-called feet give useful information about values that can be determined automatically during reconstruction.
4. In the third step, the columns of the polyomino are encoded. Since every column is connected, it is sufficient for the algorithm to write where each column starts and where it ends, as distances from the upper and lower sides of the bounding box (see the example in the figure). This takes advantage of the vertical convexity; moreover, while encoding the different columns we also have some information about them thanks to the horizontal convexity. This information is used to encode the value of each column with a number contained in a limited range, allowing it to be written with a minimal number of bits.
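The per-column encoding of step 4 can be sketched as follows. This is a simplified version that omits the range-narrowing tricks based on the feet and on horizontal convexity:

```python
def encode_columns(poly):
    """Encode each column of an hv-convex polyomino as a pair of
    distances from the top and bottom of the bounding box. V-convexity
    guarantees each column is fully described by these two offsets."""
    rows = [r for r, _ in poly]
    cols = [c for _, c in poly]
    r_top, r_bot = min(rows), max(rows)
    out = []
    for c in range(min(cols), max(cols) + 1):
        rs = [r for r, cc in poly if cc == c]
        # (offset of column start from the top side,
        #  offset of column end from the bottom side)
        out.append((min(rs) - r_top, r_bot - max(rs)))
    return out
```

For an L-shaped tetromino whose tall side spans the whole bounding box, the first column is encoded as (0, 0) and the short foot as (2, 0).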

The complete description of the coding of polyominoes is a bit more complex and would require much more space. If you are interested in the details of the coding, you can consult the thesis about the PCIF algorithm in the documentation section.

The evolution of the PCIF algorithm is now available! It has a greater compression ratio, it is much faster, and the implementation is available as both Java bytecode and native executables. Take a look at the new BCIF algorithm.