Hexagonal image processing is a somewhat exotic part of imaging science. Digital images are typically treated as a number of pixels arranged on a square lattice. This is not dictated by nature; it is simply the easiest way to describe two-dimensional data, which is what images are. Another way of representing such two-dimensional data is on a hexagonal lattice. Researchers have shown that hexagonal sampling is more efficient than square sampling, and others have investigated the performance of certain image processing algorithms on square and hexagonal lattices to show that hexagonal sampling also improves computational efficiency. What I found in most of these papers is that the authors typically don't spend much time on how the hexagonally sampled images were obtained. The biggest problem in hexagonal image processing at the moment is that there is no way to capture images on a hexagonal lattice, simply because no such image sensor exists. So the only way to process hexagonal images is to create them artificially. In this article I propose a method to create hexagonally sampled images.
Method
There are rather simple ways to create hexagonally sampled images from square-sampled images. You could, for example, stretch the square-sampled image using a simple interpolation method and then ignore every other pixel so that you end up with a virtual hexagonal grid. Other, similar methods are used in the existing literature. What they all have in common is that they try to convert the square-sampled image into a hexagonally sampled image with approximately the same pixel count. This process always introduces a loss of information, and it is a very poor approximation of hexagonal sampling.
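For illustration, a minimal MATLAB sketch of this kind of "virtual hexagon" conversion could look as follows. This is not the method proposed here; the variable names are my own, and the exact stretch factor and pixel-dropping pattern vary between publications.

% Naive pseudo-hexagonal conversion: stretch the image horizontally, then
% keep only every other pixel, shifted by one column on alternating rows.
% The pixels stay square; only the sampling positions mimic a hexagonal grid.
src  = imread('input.png');                             % square-sampled source
wide = imresize(src, [size(src,1), 2*size(src,2)], 'bilinear');

hexLike = zeros(size(src), 'like', src);                % same pixel count as src
hexLike(1:2:end, :, :) = wide(1:2:end, 1:2:end, :);     % odd rows:  columns 1,3,5,...
hexLike(2:2:end, :, :) = wide(2:2:end, 2:2:end, :);     % even rows: columns 2,4,6,...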
What these approaches ignore is that hexagonal sampling not only requires a hexagonal lattice; it also requires hexagon-shaped pixels. You can't simply take a square pixel, put it on a hexagonal grid, and call it a hexagonal pixel. So what I propose is a resampling algorithm that imitates 'real' hexagonal sampling.
Real image sampling means that a natural scene reflects light into a camera's lens. The lens may introduce some distortion, which limits the spatial resolution of the image. The light then hits the sensor surface and creates a projection of the scene in front of the lens. This projection is band-limited (because of the lens) but still continuous. The sensor is divided into pixels, which then perform the sampling. This is the process that needs to be imitated in resampling.
Obviously, we don't want to use a natural scene, a lens, or a physical sensor, so our starting point is the projection of the scene on the sensor's surface. Unfortunately, we can't imitate a truly continuous image, because it would require unlimited resolution. As an approximation, however, we can use an image with very high resolution and sample it at a much lower resolution. This is still only an approximation, but it is far better than putting square pixels on a hexagonal grid. The greater the ratio between the input and output resolution, the better the approximation. For testing image processing algorithms it is typically sufficient to use images with a few hundred pixels in each direction. With an input image taken with a modern camera, you can easily achieve ratios of about 10:1. If you are satisfied with smaller output images or need a better approximation, you can increase this ratio further.
Algorithm
My algorithm takes a SquareImage and a ShrinkFactor as input parameters. The SquareImage is the high-resolution source image; the ShrinkFactor is the ratio used for subsampling. With a shrink factor of 10, the output image will be roughly 10 times smaller than the input image in each direction (the pixel count will be about 1/100 of the input image). It is not exactly 10 times smaller because the aspect ratio of the hexagonal grid differs from that of the square grid: a hexagonal image will have about 7% more pixels on the horizontal axis and 7% fewer on the vertical axis (note that this depends on the orientation of the hexagonal grid). If you use a 2880x1620 input image and set the ShrinkFactor to 10, the output image will be 309x150 pixels. You may notice that the pixel count is also not exactly 1/100 of the input. This is because the borders of the hexagonal grid never exactly match the borders of the source image; in fact, you will lose a few pixels at the borders of the image.
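As a back-of-the-envelope check, the expected output size can be estimated from the geometry of a hexagonal lattice whose cells have the same area as the square pixels. The following MATLAB snippet is only a sketch with my own variable names; it assumes flat-topped hexagons and ignores the exact border handling.

% Estimate the output size of the hexagonal grid for a given shrink factor,
% assuming hexagonal pixels with the same area as the equivalent square pixels.
inputSize    = [1620 2880];                  % rows x columns of the source image
shrinkFactor = 10;

hexArea = shrinkFactor^2;                    % equal-area constraint
R       = sqrt(2*hexArea/(3*sqrt(3)));       % hexagon circumradius (centre to vertex)

horizPitch = 1.5*R;                          % column spacing, ~0.93 * shrinkFactor
vertPitch  = sqrt(3)*R;                      % row spacing,    ~1.07 * shrinkFactor

outCols = floor(inputSize(2)/horizPitch)     % ~7% more columns  -> about 309
outRows = floor(inputSize(1)/vertPitch)      % ~7% fewer rows    -> about 150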
The algorithm crops hexagonal tiles out of the input image, calculates their mean value, and inserts this value into the output image at the correct position. In order to crop these hexagonal tiles, I use a mask with a hexagonal shape and an area of 625 pixels; a square with a width of 25 pixels has the same area. This makes it easy to create a square-sampled image with the same pixel size to compare against the hexagonally sampled image. In order to achieve the intended ShrinkFactor, I enlarge the image using bicubic interpolation before resampling.
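The complete implementation is in HexResampling.m; the following is only a condensed MATLAB sketch of the idea described above. The function name, variable names, and the exact tile geometry (flat-topped hexagons, alternate columns shifted by half a row spacing) are my own assumptions and may differ from the actual file, and a grayscale input is assumed for brevity.

% Sketch of the hexagonal resampling step: enlarge the source with bicubic
% interpolation, then average hexagonal tiles (cut out with Hexagon25.png)
% centred on a hexagonal grid.
function HexImage = hexResampleSketch(SquareImage, ShrinkFactor)
    if ndims(SquareImage) == 3
        SquareImage = rgb2gray(SquareImage);       % grayscale only, for brevity
    end
    mask = imread('Hexagon25.png') > 0;            % binary hexagon mask, area ~625 px
    [mH, mW] = size(mask);

    % Enlarge so that one mask-sized tile (area 625 = 25^2) corresponds to
    % ShrinkFactor x ShrinkFactor pixels of the original image.
    big = imresize(im2double(SquareImage), 25/ShrinkFactor, 'bicubic');

    % Grid pitch for flat-topped hexagons with the same area as a 25x25 square.
    R  = sqrt(2*625/(3*sqrt(3)));
    dx = 1.5*R;                                    % horizontal centre distance
    dy = sqrt(3)*R;                                % vertical centre distance

    rows = floor((size(big,1) - mH)/dy);
    cols = floor((size(big,2) - mW)/dx);
    HexImage = zeros(rows, cols);

    for r = 1:rows
        for c = 1:cols
            % Every other column is shifted down by half a row spacing.
            y0 = round((r-1)*dy + mod(c,2)*dy/2) + 1;
            x0 = round((c-1)*dx) + 1;
            tile = big(y0:y0+mH-1, x0:x0+mW-1);
            HexImage(r,c) = mean(tile(mask));      % mean over the hexagonal tile
        end
    end
end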
The code is written with a lot of loops. This doesn't look elegant and increases the processing time, but it helps to save memory, which can be an issue when working with large input images.
HexResampling.m is the Matlab source file. Hexagon25.png is the mask which is required to create the tiles of the image.
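Based on the parameters described above, a typical call might look like the following. The exact function signature is my assumption, not taken from HexResampling.m, and 'scene.jpg' is just a placeholder file name.

% Hypothetical usage: resample a high-resolution photo with a shrink factor of 10.
SquareImage = imread('scene.jpg');                 % e.g. a 2880x1620 camera image
HexImage    = HexResampling(SquareImage, 10);      % assumed signature: image, shrink factor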
Below are some examples. The images on the left side are hexagonally sampled with my algorithm. The images on the right side are created by square sampling the same input image with the same pixel size.