ANALYSIS OF IMAGE SEGMENTATION METHODS

F.K.Achilova, G.Z.Toyirova
Karshi Branch TIUT, Uzbekistan

INTRODUCTION

In computer vision, segmentation is the process of dividing a digital image into several segments. The purpose of segmentation is to simplify or change the representation of the image so that it is simpler and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel of an image such that pixels with the same label share common visual characteristics. The result of segmenting an image is a set of segments that together cover the whole image, or a set of contours extracted from the image. All pixels in a segment are similar with respect to some characteristic or computed property, such as color, brightness, or texture; neighboring segments differ significantly in that characteristic.
Methods based on clustering

k-means is an iterative method used to divide an image into K clusters. The basic algorithm is as follows (a minimal MATLAB sketch of these steps is given after this section):

1. Select K cluster centers, randomly or based on some heuristic.
2. Assign each pixel of the image to the cluster whose center is closest to that pixel.
3. Recompute the cluster centers by averaging all the pixels in each cluster.
4. Repeat steps 2 and 3 until convergence (for example, until pixels no longer change cluster).

Here the distance is usually taken as the sum of squared or absolute differences between a pixel and the cluster center. The difference is typically based on the pixel's color, brightness, texture and location, or on a weighted sum of these factors. K can be chosen manually, randomly, or heuristically.

This algorithm is guaranteed to converge, but it may not reach an optimal solution. The quality of the solution depends on the initial set of clusters and on the value of K.
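Below is a minimal MATLAB sketch of these four steps, not an optimized implementation: it uses only pixel brightness as the feature, a hand-picked K = 3, a fixed iteration cap, and the same pears.png image as the example at the end of this article. All of these choices are illustrative rather than prescribed by the method.

% k-means on pixel brightness (illustrative parameters)
I = double(rgb2gray(imread('pears.png')));
x = I(:);                                   % one feature per pixel: brightness
K = 3;                                      % number of clusters, chosen by hand
centers = x(randperm(numel(x), K));         % step 1: K random initial centers
centers = centers(:);                       % keep them as a K-by-1 column

for iter = 1:100
    [~, labels] = min(abs(x - centers'), [], 2);           % step 2: nearest center
    newCenters = accumarray(labels, x, [K 1], @mean, 0);   % step 3: recompute centers
    if max(abs(newCenters - centers)) < 1e-3                % step 4: stop at convergence
        break
    end
    centers = newCenters;
end

segmented = reshape(labels, size(I));       % label image: one value per cluster
imshow(label2rgb(segmented))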
Methods using the histogram

Histogram-based methods are very efficient compared with other image segmentation methods because they require only one pass through the pixels. In this approach a histogram is computed over all pixels of the image, and its minima and maxima are used to find the clusters in the image. Color or brightness can be used as the measure.

A refinement of this method is to apply it recursively to the clusters in the image in order to divide them into smaller clusters. The process is repeated with smaller and smaller clusters until no new clusters appear.

One drawback of this method is that it may be difficult to identify significant minima and maxima in the histogram. This way of classifying images is related to distance metrics and to the matching of integrated regions.

Histogram-based approaches can also be adapted quickly to several frames while retaining their single-pass speed advantage. The histogram can be built in several ways when multiple frames are considered. The same approach used for a single frame can be applied to several frames, and after the results are merged, minima and maxima that were difficult to isolate become more distinct. A histogram can also be computed per pixel, using the information to determine the most frequent color at a given pixel position. This approach segments moving objects against a static background, which gives a different kind of segmentation that is useful when processing video.
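The single-pass, valley-based idea can be sketched in MATLAB as follows. The 9-bin smoothing window and the use of brightness as the feature are illustrative assumptions, not part of the method itself.

% histogram-based segmentation: split the gray-level axis at histogram valleys
I = rgb2gray(imread('pears.png'));
h = imhist(I);                             % 256-bin brightness histogram (one pass)
h = conv(h, ones(9,1)/9, 'same');          % smooth to suppress spurious minima

% interior bins lower than both neighbours are candidate valleys (local minima)
isValley = h(2:255) < h(1:254) & h(2:255) < h(3:256);
thresholds = find(isValley);               % approximate gray levels at the valleys

% each interval between consecutive valleys becomes one cluster
labels = ones(size(I));
for t = thresholds'
    labels = labels + double(I > t);
end
imshow(label2rgb(labels))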
Graph cutting methods

Graph-cut methods can be applied effectively to image segmentation. In these methods the image is represented as a weighted undirected graph. Typically a pixel or a group of pixels is associated with a vertex, and the edge weights encode the (dis)similarity of neighboring pixels. The graph (image) is then cut according to a criterion designed to produce "good" clusters. Each set of vertices (pixels) produced by these algorithms is considered an object in the image. Popular algorithms in this category include normalized graph cuts, random walker, minimum cut, isoperimetric partitioning, and segmentation based on a minimum spanning tree.
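As a rough illustration of the last of these, the sketch below builds a 4-connected pixel graph, takes its minimum spanning tree, and cuts the heaviest tree edges. The downsampling factor and the threshold tau are arbitrary illustrative values; practical minimum-spanning-tree segmentation uses an adaptive merging criterion rather than a fixed threshold.

% minimum-spanning-tree segmentation on a 4-connected pixel grid
I = im2double(rgb2gray(imread('pears.png')));
I = imresize(I, 0.25);                  % keep the graph small for this sketch
[m, n] = size(I);
idx = reshape(1:m*n, m, n);             % node number of each pixel

% horizontal and vertical neighbour pairs, weighted by |intensity difference|
s = [reshape(idx(:,1:n-1), [], 1); reshape(idx(1:m-1,:), [], 1)];
t = [reshape(idx(:,2:n),   [], 1); reshape(idx(2:m,:),   [], 1)];
w = abs(I(s) - I(t));

G = graph(s, t, w);                     % weighted undirected pixel graph
T = minspantree(G);                     % minimum spanning tree of that graph
tau = 0.05;                             % cut threshold (hypothetical value)
T = rmedge(T, find(T.Edges.Weight > tau));   % remove "dissimilar" tree edges
labels = reshape(conncomp(T), m, n);    % each remaining component is one segment
imshow(label2rgb(labels, 'jet', 'w', 'shuffle'))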
Watershed segmentation

In watershed segmentation, the magnitude of the image gradient is treated as a topographic surface. Pixels with the highest gradient magnitude correspond to watershed lines, which represent the region boundaries. Water placed on any pixel enclosed by a common watershed line flows downhill to a common local minimum of brightness. The pixels from which water drains to the same minimum form a catchment basin, which represents a segment.
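The basic idea can be sketched directly: treat the gradient magnitude as the surface and apply the watershed transform. Run this way, without markers, it usually produces strong oversegmentation, which is why the longer example later in this article first constructs foreground and background markers.

% watershed of the raw gradient magnitude (typically oversegments)
I = im2double(rgb2gray(imread('pears.png')));
gradmag = imgradient(I);        % topographic surface: gradient magnitude
L = watershed(gradmag);         % L == 0 marks the watershed (ridge) lines
imshow(label2rgb(L))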
Segmentation using a model

The basic assumption of this approach is that the structures or organs of interest have a repetitive geometric form. Consequently, one can build a probabilistic model that explains the variation in the shape of the organ and then, when segmenting an image, impose constraints using this model as a prior. Such a task involves (i) registering the training examples to a common pose, (ii) a probabilistic representation of the variation of the registered samples, and (iii) statistical inference between the model and the image. State-of-the-art knowledge-based segmentation methods in the literature include active shape and appearance models, active contours, deformable templates, and level-set methods.
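As a toy illustration of step (ii) only, the sketch below fits a linear (PCA) model of shape variation to a set of already-registered landmark shapes. The training matrix here is random and purely illustrative; a real model would be trained on hand-labelled contours, and steps (i) and (iii) are not shown.

% PCA model of shape variation over registered landmark shapes (toy data)
nShapes = 50;  nPoints = 30;
X = randn(nShapes, 2*nPoints);          % each row: [x1..xn y1..yn] of one shape

meanShape = mean(X, 1);                 % mean shape (registration assumed done)
Xc = X - meanShape;                     % centred training set
[~, S, V] = svd(Xc, 'econ');            % principal modes of shape variation
eigVals = diag(S).^2 / (nShapes - 1);   % variance captured by each mode

k = 5;                                  % number of modes kept (illustrative)
P = V(:, 1:k);
b = randn(k, 1) .* sqrt(eigVals(1:k));  % shape parameters plausible under the prior
newShape = meanShape' + P * b;          % a shape generated by the model
plot(newShape(1:nPoints), newShape(nPoints+1:end), 'o-')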
Multi-scale segmentation

Image segmentation can be performed at different scales in a scale space, sometimes proceeding from fine scales to coarse ones. The segmentation criterion can be arbitrarily complex and may take both local and global criteria into account. The general requirement is that each region must be connected in some sense.
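One simple way to picture this, assuming a plain gradient-watershed criterion is reused at every scale, is to smooth the image with Gaussians of increasing width and segment each smoothed version; regions merge as the scale grows. The scale values below are arbitrary.

% segment the same image at several Gaussian smoothing scales
I = im2double(rgb2gray(imread('pears.png')));
sigmas = [1 4 8];                           % illustrative scales
for i = 1:numel(sigmas)
    Is = imgaussfilt(I, sigmas(i));         % coarser representation of the image
    L  = watershed(imgradient(Is));         % same criterion applied at each scale
    subplot(1, numel(sigmas), i)
    imshow(label2rgb(L))
    title(sprintf('sigma = %g', sigmas(i)))
end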
Example segmentation (marker-controlled watershed segmentation in MATLAB, Image Processing Toolbox):

% Read the image and convert it to grayscale
rgb = imread('pears.png');
I = rgb2gray(rgb);
imshow(I)
text(732, 501, '…', 'FontSize', 7, 'HorizontalAlignment', 'right')

% Gradient magnitude as the segmentation function
hy = fspecial('sobel');
hx = hy';
Iy = imfilter(double(I), hy, 'replicate');
Ix = imfilter(double(I), hx, 'replicate');
gradmag = sqrt(Ix.^2 + Iy.^2);
figure, imshow(gradmag, []), title(' ')

% Watershed of the raw gradient oversegments the image
L = watershed(gradmag);
Lrgb = label2rgb(L);
figure, imshow(Lrgb), title('Lrgb')

% Foreground markers: opening/closing by reconstruction
se = strel('disk', 20);
Io = imopen(I, se);
figure, imshow(Io), title('Io')
Ie = imerode(I, se);
Iobr = imreconstruct(Ie, I);
figure, imshow(Iobr), title('Iobr')
Ioc = imclose(Io, se);
figure, imshow(Ioc), title('Ioc')
Iobrd = imdilate(Iobr, se);
Iobrcbr = imreconstruct(imcomplement(Iobrd), imcomplement(Iobr));
Iobrcbr = imcomplement(Iobrcbr);
figure, imshow(Iobrcbr), title('Iobrcbr')

% Regional maxima of the reconstructed image are the foreground markers
fgm = imregionalmax(Iobrcbr);
figure, imshow(fgm), title('fgm')
I2 = I;
I2(fgm) = 255;
figure, imshow(I2), title('fgm')

% Clean up the marker blobs and remove very small ones
se2 = strel(ones(5, 5));
fgm2 = imclose(fgm, se2);
fgm3 = imerode(fgm2, se2);
fgm4 = bwareaopen(fgm3, 20);
I3 = I;
I3(fgm4) = 255;
figure, imshow(I3), title('fgm4')

% Background markers: watershed ridge lines of the thresholded image
bw = im2bw(Iobrcbr, graythresh(Iobrcbr));
figure, imshow(bw), title('bw')
D = bwdist(bw);
DL = watershed(D);
bgm = DL == 0;
figure, imshow(bgm), title('bgm')

% Impose the markers as minima of the gradient and recompute the watershed
gradmag2 = imimposemin(gradmag, bgm | fgm4);
L = watershed(gradmag2);

% Visualize the result
I4 = I;
I4(imdilate(L == 0, ones(3, 3)) | bgm | fgm4) = 255;
figure, imshow(I4), title(' ')
Lrgb = label2rgb(L, 'jet', 'w', 'shuffle');
figure, imshow(Lrgb), title('Lrgb')
figure, imshow(I), hold on
himage = imshow(Lrgb);
set(himage, 'AlphaData', 0.3);
title('Lrgb')
Conclusion

As a rule, algorithms for the segmentation of monochrome images are based on one of two basic properties of image brightness: discontinuity or homogeneity. In the first case, the approach consists in partitioning the image into parts based on sharp changes in brightness, such as those that occur at object boundaries. The second group of methods partitions the image into regions that are homogeneous with respect to certain predefined criteria.