
Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3'-Diaminobenzidine & Haematoxylin

Abstract


This paper describes a comparative study of the results of various segmentation methods applied to digital images of follicular lymphoma cancer tissue sections. The sensitivity, the specificity and several other parameters are calculated for the following adaptive threshold methods of segmentation: the Niblack method, the Sauvola method, the White method, the Bernsen method, the Yasuda method and the Palumbo method. The methods are applied to three types of images constructed by extraction of the brown colour information from artificial images synthesized on the basis of counterpart experimentally captured images. The paper also demonstrates the usefulness of the microscopic image synthesis method in the evaluation and comparison of image processing results. A thorough analysis of this broad range of adaptive threshold methods applied to: (1) the blue channel of RGB, (2) the brown colour extracted by deconvolution and (3) the 'brown component' extracted from RGB makes it possible to select pairs of method and image type for which the method is most efficient under various criteria, e.g. accuracy and precision in area detection or accuracy in the number of detected objects. The comparison shows that the results of the White, the Bernsen and the Sauvola methods are better than those of the remaining methods for all types of monochromatic images. Taken overall, the three methods segment the immunopositive nuclei with mean accuracies of 0.9952, 0.9942 and 0.9944 respectively. However, the best results are achieved for the monochromatic image whose intensity represents the brown colour map constructed by the colour deconvolution algorithm. The specificity for the Bernsen and the White methods is 1, and the sensitivities are 0.74 for the White method and 0.91 for the Bernsen method, while the Sauvola method achieves a sensitivity of 0.74 and a specificity of 0.99.
According to the Bland-Altman plot, the objects selected by the Sauvola method are segmented without undercutting the area of true positive objects, but with extra false positive objects. The Sauvola and the Bernsen methods give complementary results, which will be exploited when the new method of virtual tissue slide segmentation is developed.

Virtual Slides

The virtual slides for this article can be found here: slide 1: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617947952577 and slide 2: http://diagnosticpathology.slidepath.com/dih/webViewer.php?snapshotId=13617948230017.

Introduction

Immunohistochemically (IHC) stained tissue samples are used by pathologists to establish the diagnosis, the prognosis and the treatment in various types of cancer [1–4]. The evaluation process takes into account the amount of immunopositive cells (membrane, cytoplasm or nuclear staining) and the architecture of the tissue sample. Such evaluation can be done by an experienced pathologist directly via the microscope or from digital images of the samples.

Human direct evaluation is irreproducible, time-consuming and prone to intra- and interobserver error [5]. Therefore, different automated methods based on digital image processing have been proposed, as they promise to improve the reproducibility of evaluation and can become tools for inter- and intralaboratory unification of cut-offs and threshold levels [6].

To make the validation more accurate and precise, the image segmentation should indicate cells' membranes, cells' cytoplasm and/or nuclei and/or other organelles (e.g. the lysosome) efficiently and robustly [7–9]. The error in object detection ought to be as small as possible and should be given explicitly, since it determines the errors in features important in the process of diagnosis. Errors in object detection influence the evaluation of object morphology, the pattern of objects' distribution and the texture features which reflect chromatin distribution [10, 11].

Segmentation of images of stained tissue samples is a complex problem because of the huge variability of shapes, sizes and colours of the objects of interest and of the general architecture of the tissue samples. So far, many methods which detect objects of interest in these types of images have been developed by many groups [12–18]. These methods come from various segmentation approaches and present various advantages and disadvantages. The main obstacle is that all these methods are validated by their authors on their own experimentally captured images. There is a lack of any comparative study which answers the question of the usefulness, efficiency and reproducibility of a particular method applied to a particular type of tissue and/or staining process. Using a comparative study on a fixed image database, a result can be obtained even if only a very small difference between segmentation results is expected.

A comparative study of the results of various segmentation methods has been performed for fluorescent microscopy images of living cells [6], for stained tissue sections in neuroblastoma cancer (Ki67) [8] and for breast cancer cells (estrogen/progesterone status) [1]. In the case of fluorescent microscopy image segmentation, Lehmussola and co-workers [19, 20] proposed evaluating segmentation methods using a set of synthetic images constructed by dedicated software with assumed object border positions. Averaged multiple manual segmentation results were treated as the reference 'truth' in the other comparisons. Because the comparison results for fluorescent images allowed their authors to detect small differences in method performance, it was decided to use synthetic images to compare the chosen segmentation methods. This paper presents a method of constructing artificial tissue section images. In this method the position, shape and colour of objects and background are generated according to a statistical model constructed from observation of a set of experimentally acquired images and from the physics of digital image acquisition and microscope image formation. In this paper the follicular lymphoma cancer tissue sections immunohistochemically stained with 3,3'-diaminobenzidine (DAB) and counterstained with hematoxylin (H) are of interest. Images captured from several tissue sections and from various camera and microscope sets are used to gain knowledge about image features and characteristics.

The reliable evaluation of the chosen adaptive threshold methods of segmentation is the main goal of this investigation. The results of this study will serve as the background for developing a new hybrid method in the next step of our investigation. There is also an additional aim of this paper: to present the usefulness of the image synthesis method in the evaluation and comparison of image processing results. The synthetic images maintain the features of experimental images, such as the level of noise, the range of colours and tones, vignetting and so on, to a controlled degree, which gives the researcher the possibility of observing the influence of all the features together, and of each feature separately, on the results of image processing methods.

The next section, "Related works", reviews the principles of the automated approaches developed so far and used for the evaluation of various types of cancer tissue sections. The following section describes the characteristics of experimentally acquired lymphoma tissue section images stained with DAB & H. In total, six segmentation methods are introduced in a further section. The experimentally collected and synthesized artificial images are presented subsequently. The validation of the methods and the results of their comparison are described in the section entitled "The results of the adaptive threshold method comparison". The discussion and conclusions are presented in the last section.

Related works

The first systems for microscopic image analysis in histopathology, e.g. iPATH or UICC-TPCC [21], were established as academic projects. The following steps are performed in these systems: sampling, segmentation and calculation of chosen features which discriminate among normal, benign and malignant cells or cells' nuclei. The EAMUS system [22] followed the systems described above. It was dedicated to digitalized glass slides, called virtual slides, for telemedicine, and was designed as a remote system connected via the internet to automatic image measurement systems in order to support consultations between physicians and scientists. Its successor was developed under the MATLAB and Java platforms by Markiewicz [23] as a system for specific markers and pathologies. Both of these systems are applied within telepathology project frameworks as tools for verifying the idea of examining microscopic images from a distance. Subsequently, semi-automatic computer-assisted systems for histopathology and immunohistopathology have become commercially available from DAKO and Aperio, but they are used as virtual multiresolution slide constructors rather than as systems for classifying samples or objects within samples. More oriented towards feature evaluation is the system proposed by Bueno [24] as a parallel solution for high resolution histological and immunohistochemically stained tissue section images. What was learned from the use of all the systems described above is that automatic image segmentation, as the bottleneck of computer-aided image analysis, is the most complex and challenging step both for histopathological images of paraffin tissue sections and for cytological smears [25–27]. There are complex and sophisticated algorithms [8, 12, 14, 28–30] which have been developed and tested for various markers used in digital images of histopathological samples, applied to various tissues in various pathologies.
All of them use various threshold methods on selected or modified colour information separated from RGB digital images. Some of them use the blue channel only, as it gives the greatest contrast between brown and blue but loses the information about the brown colour spread over the G and R channels [31]; others propose combinations of all RGB channels:

  • the "brown axis" = B − 0.3·(R + G) (Tadrous 2010) [32];

  • colour deconvolution, in which three well defined colour vectors, describing new colours in the old colour space, should be obtained as calibration information (Ruifrok and Johnston 2001) [33–35];

  • a "de-staining" algorithm separating up to three visually distinct colours to effect selective contrast [32].

A minority of algorithms use the HSV colour model, in which detection of the brown colour can be a simple rotation of the hue axis, as proposed by Kuse [36]. All threshold methods suffer from a lack of universality, as they are adjusted to specific image parameters: the level of contrast [37–39], the degree of saturation [8] and so on. It is observed that changes in image characteristics, caused by tissue variability or, more often, by optics and camera settings, lead to mediocre segmentation results [15, 40]. This paper compares the results of the chosen thresholding methods applied to three types of colour information extracted from RGB digital images: (1) the B channel, (2) the brown axis and (3) the brown channel separated by deconvolution. It allows us to analyze which thresholding method is effective for which type of colour information when brown objects in DAB & H stained lymphoma tissue sections are to be selected.
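To make the three colour-separation routes concrete, a minimal Python/NumPy sketch is given below (the paper's own implementation is in MATLAB; the stain matrix passed to the deconvolution routine is a calibration input, as Ruifrok and Johnston describe, and no stain-vector values are assumed here):

```python
import numpy as np

def blue_channel(rgb):
    """(1) B channel only: strong blue/brown contrast, but it ignores
    the brown signal spread over the R and G channels."""
    return rgb[..., 2].astype(float)

def brown_axis(rgb):
    """(2) Tadrous-style 'brown axis': B - 0.3*(R + G)."""
    rgbf = rgb.astype(float)
    return rgbf[..., 2] - 0.3 * (rgbf[..., 0] + rgbf[..., 1])

def deconvolve(rgb, stain_matrix):
    """(3) Ruifrok-Johnston colour deconvolution: convert RGB to
    optical density and unmix with a 3x3 matrix whose rows are unit
    stain vectors (e.g. haematoxylin, DAB, residual). The matrix is
    an input obtained by calibration."""
    od = -np.log10(np.clip(rgb.astype(float), 1.0, 255.0) / 255.0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(stain_matrix)
    return conc.reshape(rgb.shape[:2] + (3,))
```

After unmixing, the DAB plane of the returned array plays the role of the "brown map" used in the comparisons below.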

The characteristics of experimentally acquired lymphoma section tissue images stained with DAB & H

Digital images of tissue sections of paraffin embedded lymphomas were captured in a brightfield microscope. These images differ in colour ranges, in the pattern of object (cells' nuclei) distribution, as well as in local and global contrast and brightness. Figure 1 (top-left) shows an image collected in the Hospital de Tortosa Verge de la Cinta using indirect immunohistochemistry: primary antibodies against FOXP3 and secondary antibodies, which include the peroxidase block, labeled polymer and buffered substrate/DAB+ chromogen, finally counterstained with hematoxylin. All images show brown end products for the immunopositive cells' nuclei among blue nuclei of the immunonegative cells.

Figure 1

Experimentally collected image. The experimentally collected image (top-left) and its B-channel of RGB (top-right), its “brown” axis (bottom-left) and its brown map after colour deconvolution (bottom-right).

Singular brown objects, as well as small clusters of brown objects surrounded by blue ones, are observed in the images. In the clusters, nuclei touch but do not overlap one another. Variation in the blue and brown colours, as well as variation in object density within one image and from one image to another, is observed. The inside of brown objects appears almost homogeneous, with a smooth and barely visible texture, while the inside of blue objects seems to be filled mostly with a curly texture. Cells' nuclei marked with FOXP3 are nuclei of regulatory T-cells, i.e. immune system cells, so their size distribution is similar to that of the normal T-cell population (a distribution with a small range and a sharp peak), while the size distribution of most of the blue nuclei is typical of a cancer cell population (tumoral B lymphocytes). However, some image features hinder the segmentation process, e.g. the presence of:

  • spurious stain deposits in other types of cells: stromal, scar, lymphocytes;

  • very dark parts of blue stained nuclei;

  • partly blurred nuclei border with the colour rim caused by the chromatic aberration;

  • colour noise.

Some non-homogeneity of the light distribution within a single image is observed: the middle part is brighter than the periphery. Even images collected by one pathologist, using a particular microscope and camera, differ from one another. This is caused by random changes in external light conditions and in the chosen parameters of image acquisition.

All the features of images and objects of interest described above, observable in Figure 1 (top-left), make adaptive threshold methods of segmentation adequate to the situation. Six adaptive threshold methods, locally adjusted to the contrast and originally defined for document and text segmentation, have been adapted to analyze three versions of colour information extracted from images with objects in various shades of brown among blue textured spots on an off-white background. The chosen threshold methods, the method of comparison and the threshold results are presented in the next sections.

Methods

The chosen methods of segmentation

Image segmentation can be considered as the process of dividing an image into multiple components [41, 42]. It is usually used to separate objects from the background. There are many forms of image segmentation: thresholding, clustering, transform-based and edge- or texture-based methods. Segmentation as a delimitation of boundaries between compartments is in this case limited to detecting a hypothetical (not existing in the real world) line between the nucleus and the surrounding cytoplasm or stroma. Because of contrast fluctuations between the objects of interest and the background across the image plane and from image to image, locally adaptive thresholding methods seem appropriate. The methods, which were defined for text detection in scanned digital documents, deal with grayscale images with Gaussian and uniform noise characteristics and with high contrast. Although the acquired images are 3-channel RGB images, the segmentation algorithms separately treat monochromatic images containing the separated brown colour information:

  • the blue channel from RGB, presented in Figure 1 (top-right), because of the results of the analysis of cells’ nuclei profiles presented in Figure 2;

  • the “brown channel” calculated from RGB image which is presented in Figure 1 (bottom-left);

  • the results of brown colour deconvolution from the RGB image, which can be observed in Figure 1 (bottom-right).

Figure 2

Comparison of blue and brown objects. The presentation of magnified brown object from experimentally collected image (top-left) and its line profile (top-right) and the blue object (bottom-left) with its line profile (bottom-right).

All images which have been prepared for the comparison are transformed to obtain the three versions of each image introduced above. All tested methods are implemented in MATLAB [23] and used to calculate segmentation results for all versions of colour information.

Locally adaptive thresholding

The local threshold is calculated at every point of the image with sliding-window image processing. The threshold value is based on the intensity of the pixel and its neighborhood [43]. This paper considers two local variance methods, three local contrast methods and one center-surround scheme. All expressions used in the algorithms presented below are described in Table 1.

Table 1 Expressions

Niblack

The most basic adaptive threshold method is the Niblack method [44], which belongs to the group of local variance methods. The local threshold is calculated based on the mean and standard deviation of a local neighborhood whose size is set by the parameter w. Another parameter, k, introduces a bias of the variance value.

B(x,y) = 1 if I(x,y) > T(x,y), 0 otherwise, where T(x,y) = m_{w×w}(x,y) + k · σ_{w×w}(x,y)
(1)

The values of these two parameters of the Niblack method, and the values of all other parameters used, are presented in Table 2.

Table 2 Parameters
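A minimal Python/NumPy sketch of Eq. (1) (the paper's implementation is in MATLAB, and the w and k defaults below are generic illustrative choices, not the Table 2 values):

```python
import numpy as np

def niblack(img, w=15, k=-0.2):
    """Niblack threshold, Eq. (1): T = m_{wxw} + k * sigma_{wxw};
    pixels brighter than T are marked as objects. w must be odd."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    m = win.mean(axis=(-2, -1))   # local mean
    s = win.std(axis=(-2, -1))    # local standard deviation
    return (img > m + k * s).astype(np.uint8)
```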

Sauvola

The method presented by Sauvola and Pietikäinen [45] is another local variance method and can be treated as a modified version of Niblack's method. It is based on one more parameter (R), which introduces the variance standardization value.

B(x,y) = 1 if I(x,y) > T(x,y), 0 otherwise, where T(x,y) = m_{w×w}(x,y) · (1 + k · (σ_{w×w}(x,y)/R − 1))
(2)
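A corresponding Python/NumPy sketch of Eq. (2) (w, k and R are generic defaults rather than the paper's Table 2 settings):

```python
import numpy as np

def sauvola(img, w=15, k=0.2, R=128):
    """Sauvola threshold, Eq. (2):
    T = m_{wxw} * (1 + k * (sigma_{wxw} / R - 1)); w must be odd."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    m = win.mean(axis=(-2, -1))
    s = win.std(axis=(-2, -1))
    return (img > m * (1 + k * (s / R - 1))).astype(np.uint8)
```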

White

The method presented by White and Rohrer [37] separates objects from the background as follows: if the value of the analyzed pixel multiplied by the bias parameter is greater than the mean value of its neighborhood, the pixel is considered an object. Basically, if the pixel is considerably darker than its surroundings, it is considered an object.

B(x,y) = 1 if m_{w×w}(x,y) < I(x,y) · bias, 0 otherwise.
(3)
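Eq. (3) can be sketched as follows (Python/NumPy; bias = 1.0 is an illustrative default, the paper's value is in Table 2):

```python
import numpy as np

def white(img, w=15, bias=1.0):
    """White-Rohrer rule, Eq. (3): the pixel is an object when
    m_{wxw}(x,y) < I(x,y) * bias; w must be odd."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    m = win.mean(axis=(-2, -1))   # local mean of the neighbourhood
    return (m < img * bias).astype(np.uint8)
```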

Bernsen

Another local contrast method is offered by Bernsen [38] as a two-stage method. In the first stage, the contrast value is calculated as the difference between the maximum and minimum values in the neighborhood. In the second stage, the threshold value is calculated as the mean of the minimum and maximum values in the neighborhood of the analyzed pixel if the contrast value is high enough (over the assumed T_c value).

B(x,y) = 1 if I(x,y) > T(x,y), 0 otherwise, where T(x,y) = (max_{w×w}(x,y) + min_{w×w}(x,y)) / 2 if C_{w×w}(x,y) ≥ T_c, 0 otherwise.
(4)
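A Python/NumPy sketch of Eq. (4), following the equation literally (T = 0 where the contrast condition fails; w and T_c are illustrative defaults):

```python
import numpy as np

def bernsen(img, w=15, t_c=15):
    """Bernsen threshold, Eq. (4): where the local contrast
    C = max - min reaches T_c, the threshold is (max + min) / 2;
    elsewhere T = 0. w must be odd."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    mx = win.max(axis=(-2, -1))
    mn = win.min(axis=(-2, -1))
    T = np.where(mx - mn >= t_c, (mx + mn) / 2.0, 0.0)
    return (img > T).astype(np.uint8)
```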

Yasuda

The Yasuda, Dubois and Huang method [39] is a local contrast method and consists of four steps [46]. Step 1. Increasing the dynamic range of the image.

I_1(x,y) = (I(x,y) − min(I)) / (max(I) − min(I))
(5)

Step 2. Nonlinear smoothing. Replace the pixel with the average value (m_nb) of its 3 × 3 neighbourhood if the local range is below the assumed value of T1.

I_2(x,y) = m_nb if (max(nb) − min(nb)) < T1, I_1(x,y) otherwise.
(6)

Step 3. Primary thresholding with coarse marking of the background. For every pixel, its neighborhood is examined: if the local contrast is not greater than the assumed value of T2, or the value of the pixel is greater than the average of the neighborhood, the pixel is flagged as background. For every other pixel the calculation below is performed.

I_3(x,y) = 1 if m_{w×w}(x,y) < I_2(x,y) ∨ c_w < T2, (I_2(x,y) − min_{w×w}(x,y)) / c_w otherwise, where c_w = max_{w×w}(x,y) − min_{w×w}(x,y)
(7)

Step 4. Secondary thresholding with precise segmentation to classify the rest of the pixels. The sliding-window image processing uses a 3 × 3 window. In this step the pixel is marked as background if the minimum of the neighborhood is lower than the assumed value of T3 or the variance is greater than the assumed value of T4.

B(x,y) = 0 if min_{3×3}(x,y) < T3 ∨ σ_{w×w}(x,y) > T4, 1 otherwise.
(8)
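The four steps can be sketched as follows (Python/NumPy; all threshold values T1–T4 below are illustrative placeholders on the normalised [0, 1] scale, not the paper's settings):

```python
import numpy as np

def _windows(img, w):
    """w x w sliding windows over an edge-padded image (w odd)."""
    p = np.pad(img, w // 2, mode='edge')
    return np.lib.stride_tricks.sliding_window_view(p, (w, w))

def yasuda(img, w=5, t1=0.05, t2=0.1, t3=0.1, t4=0.5):
    img = img.astype(float)
    # Step 1 (Eq. 5): stretch the dynamic range to [0, 1].
    rng = img.max() - img.min()
    i1 = (img - img.min()) / rng if rng > 0 else np.zeros_like(img)
    # Step 2 (Eq. 6): nonlinear smoothing over 3x3 neighbourhoods.
    nb = _windows(i1, 3)
    local_range = nb.max(axis=(-2, -1)) - nb.min(axis=(-2, -1))
    i2 = np.where(local_range < t1, nb.mean(axis=(-2, -1)), i1)
    # Step 3 (Eq. 7): coarse background marking and local rescaling.
    ww = _windows(i2, w)
    mx = ww.max(axis=(-2, -1))
    mn = ww.min(axis=(-2, -1))
    m = ww.mean(axis=(-2, -1))
    cw = mx - mn
    scaled = (i2 - mn) / np.maximum(cw, 1e-12)
    i3 = np.where((m < i2) | (cw < t2), 1.0, scaled)
    # Step 4 (Eq. 8): secondary thresholding of the remaining pixels.
    mn3 = _windows(i3, 3).min(axis=(-2, -1))
    sigma = _windows(i3, w).std(axis=(-2, -1))
    return np.where((mn3 < t3) | (sigma > t4), 0, 1).astype(np.uint8)
```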

Palumbo

The last but not least tested method, designed by Palumbo, Swaminathan and Srihari [47], uses a center-surround scheme. The sliding window is divided symmetrically into 9 smaller windows, but only 5 of these are used in the computations. A_center is the near neighborhood and the 4 diagonal windows form the far neighborhood (A_neigh). The tested pixel is treated as an object when the central window contains the foreground object and the neighboring windows are filled with background.

B(x,y) = 0 if I(x,y) < T1 ∨ T2 · m(A_neigh) > m(A_center), 1 otherwise.
(9)
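A Python/NumPy sketch of Eq. (9) (the tile size s and the thresholds t1, t2 are illustrative values, not the paper's Table 2 settings; a plain double loop is used for clarity, not speed):

```python
import numpy as np

def palumbo(img, s=3, t1=50.0, t2=1.0):
    """Palumbo centre-surround rule, Eq. (9): the (3s x 3s) window is
    a 3x3 grid of s x s tiles; A_center is the middle tile and
    A_neigh the four diagonal (corner) tiles."""
    img = img.astype(float)
    h, wd = img.shape
    p = np.pad(img, (3 * s) // 2, mode='edge')
    out = np.ones((h, wd), dtype=np.uint8)
    for y in range(h):
        for x in range(wd):
            win = p[y:y + 3 * s, x:x + 3 * s]
            center = win[s:2 * s, s:2 * s].mean()
            neigh = np.mean([win[:s, :s].mean(),
                             win[:s, 2 * s:].mean(),
                             win[2 * s:, :s].mean(),
                             win[2 * s:, 2 * s:].mean()])
            if img[y, x] < t1 or t2 * neigh > center:
                out[y, x] = 0
    return out
```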

Hybrid methods

The Niblack and Sauvola methods appear to be insufficiently sensitive in the case of IHC images, so they were modified for better use. This was done by adding a contrast condition similar to that defined in the Bernsen method.

Hybrid of Niblack and Bernsen

Under the contrast condition defined by the Bernsen method, the threshold value is calculated using the equation defined by the Niblack method.

B(x,y) = 1 if I(x,y) > T(x,y), 0 otherwise, where T(x,y) = m_{w×w}(x,y) + k · σ_{w×w}(x,y) if C_{w×w} ≥ T_c, 0 otherwise.
(10)

Hybrid of Sauvola and Bernsen

Under the contrast condition defined by the Bernsen method, the threshold value is calculated using the equation defined by the Sauvola method.

B(x,y) = 1 if I(x,y) > T(x,y), 0 otherwise, where T(x,y) = m_{w×w}(x,y) · (1 + k · (σ_{w×w}(x,y)/R − 1)) if C_{w×w} ≥ T_c, 0 otherwise.
(11)

From this point onward, references to the Niblack and Sauvola methods mean their respective hybrids with the Bernsen method. After successfully segmenting the image, simple postprocessing is performed. The postprocessing consists of thresholding by size, where every object with an area smaller than 900 px is removed from the outcome image.
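A sketch of the hybrid rule of Eq. (11) together with the size postprocessing (Python/NumPy; all parameter values except the 900 px area are illustrative defaults):

```python
import numpy as np

def hybrid_sauvola_bernsen(img, w=15, k=0.2, R=128, t_c=15):
    """Eq. (11): the Sauvola threshold is applied only where the
    Bernsen contrast condition C_{wxw} >= T_c holds; T = 0 elsewhere."""
    pad = w // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(p, (w, w))
    m = win.mean(axis=(-2, -1))
    s = win.std(axis=(-2, -1))
    c = win.max(axis=(-2, -1)) - win.min(axis=(-2, -1))
    T = np.where(c >= t_c, m * (1 + k * (s / R - 1)), 0.0)
    return (img > T).astype(np.uint8)

def filter_small(binary, min_area=900):
    """Size postprocessing: remove 4-connected components with an
    area smaller than min_area pixels (the paper uses 900 px)."""
    h, wd = binary.shape
    seen = np.zeros((h, wd), dtype=bool)
    out = np.zeros((h, wd), dtype=np.uint8)
    for y0 in range(h):
        for x0 in range(wd):
            if binary[y0, x0] and not seen[y0, x0]:
                seen[y0, x0] = True
                stack, comp = [(y0, x0)], [(y0, x0)]
                while stack:                      # flood fill
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < wd
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                            comp.append((ny, nx))
                if len(comp) >= min_area:         # keep large objects
                    for cy, cx in comp:
                        out[cy, cx] = 1
    return out
```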

The methods of comparison of the chosen segmentation methods results

The testing synthetic images were paired with their corresponding binary representations (templates), in which the assumed shapes and locations of positive cells' nuclei are marked. Taking the binary image as a reference, the following measurements are possible, based on the template and the results of each segmentation method: true positive (TP), true negative (TN), false positive (FP) and false negative (FN).

Based on these counts, statistical measures of the performance of the segmentation methods can be calculated:

Sensitivity

S = TP / (TP + FN)
(12)

Specificity

P = TN / (TN + FP)
(13)

Dice’s coefficient

r_D = 2 · TP / (2 · TP + FN + FP)
(14)

Jaccard’s coefficient

r_J = TP / (TP + FN + FP)
(15)

Sokal and Sneath’s coefficient

r_SS = TP / (TP + 2 · FN + 2 · FP)
(16)

Rogers and Tanimoto’s coefficient

r_RT = (TP + TN) / (TP + TN + 2 · FN + 2 · FP)
(17)
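Given the four pixel counts, Eqs. (12)–(17) reduce to simple arithmetic; a minimal sketch:

```python
def segmentation_scores(tp, tn, fp, fn):
    """Performance measures of Eqs. (12)-(17) computed from the
    TP/TN/FP/FN pixel counts of one segmentation result."""
    return {
        'sensitivity': tp / (tp + fn),                # Eq. (12)
        'specificity': tn / (tn + fp),                # Eq. (13)
        'dice': 2 * tp / (2 * tp + fn + fp),          # Eq. (14)
        'jaccard': tp / (tp + fn + fp),               # Eq. (15)
        'sokal_sneath': tp / (tp + 2 * fn + 2 * fp),  # Eq. (16)
        'rogers_tanimoto':
            (tp + tn) / (tp + tn + 2 * fn + 2 * fp),  # Eq. (17)
    }
```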

To analyze the agreement between the segmentation results and the 'true' values given by the template, Bland-Altman plots (B-A plots) were produced for 70 objects segmented by each method (6), for each type of colour information (3) and for each selected feature (5), e.g. area, axis ratio of the ellipse fitted to the object, roundness, solidity and eccentricity. The results of the analysis of the 90 plots encouraged us to develop our own parameter, which allows us to find any bias or the presence of outliers in a certain aspect of a method's performance. This parameter is defined as the sum of the false positive (FP) and false negative (FN) areas divided by the area of the 'true' object, observed as a function of the distance between the centroids of the 'true' object and the segmented object. Plots similar to B-A plots, but comparing the centroid distance with the sum of FP and FN divided by the area of the 'true' object, allow identification of objects with specifically distributed erroneously detected pixels. When the distance between centroids is small while the second parameter is large, the extra detected or undetected area is distributed homogeneously around the object; otherwise, the erroneously detected or undetected area is located in such a way that the centroid of the detected area moves away from the centroid of the template object. This allows us to determine whether any of the examined methods presents a stable or occasionally occurring bias in the erroneously detected area.
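The per-object parameter described above can be sketched as follows (Python/NumPy; `template` and `result` are binary masks of a single 'true' object and its segmented counterpart):

```python
import numpy as np

def object_error(template, result):
    """Returns the paper's bias parameter -- (FP area + FN area)
    divided by the 'true' object area -- together with the distance
    between the template-object and segmented-object centroids."""
    t = template.astype(bool)
    r = result.astype(bool)
    fp = np.count_nonzero(r & ~t)   # extra detected pixels
    fn = np.count_nonzero(t & ~r)   # missed pixels
    err = (fp + fn) / np.count_nonzero(t)
    c_t = np.argwhere(t).mean(axis=0)   # template centroid (row, col)
    c_r = np.argwhere(r).mean(axis=0)   # result centroid (row, col)
    return err, float(np.linalg.norm(c_t - c_r))
```

A small centroid distance combined with a large error value indicates erroneous pixels spread evenly around the object; a large distance indicates a one-sided error.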

The experimentally collected and the synthesized artificial images

The experimentally collected images

The variability in the appearance of tissue sections in images stained with DAB & H is remarkable, due to: (1) inherent features of the tissue and the variability of morphology in pathological cases, (2) inherent variability of the results of the staining process and (3) inherent microscopic deformations as well as introduced artefacts and noise.

The morphology of pathological follicular lymphoma tissue varies [48]. Besides the different pathological manifestations, the variability in the appearance of stained samples increases during tissue preparation. This procedure is standardized but has a non-deterministic nature, because the number of chemical particles of the stain bound to a nucleus is random. This implies variation in the brown colour of immunopositive nuclei, from intensive orange, through intensive brown, to dark brown [8, 14]. The paper deals with samples immunohistochemically stained against FOXP3, which marks the nuclei of regulatory T-cells [3]. This type of staining procedure produces brown objects (immunopositive nuclei of regulatory T-cells) among blue objects (mostly immunonegative nuclei of tumoral B lymphocytes). Examination of the lymphoma samples leads to scoring the number of regulatory T-cells in the cancer tissue, which allows estimation of the specific organism's immune response to this type of cancer.

For automated evaluation of tissue samples, image acquisition must be performed. Because of the chosen microscope and camera settings (white balance, brightness, contrast) and the inherent inhomogeneity of the light distribution, as well as obstacles in the light path and noise added by the microscope and camera [31], the variability in nuclei appearance increases. The experimentally collected images were acquired via a brightfield microscope (Leica DM LB2 upright light microscope, Leica Microsystems Wetzlar GmbH, Wetzlar, Germany) with a 40x plane-apochromatic objective of numerical aperture 0.63. 60 images, captured by an experienced pathologist from 60 areas of various complexity in several samples, were collected in the Tortosa hospital. 5 images, randomly chosen from the experimental data, were used as models to construct their synthetic counterparts.

The synthesized artificial images

To compare the results of any segmentation methods, the exact positions of object boundaries should be known. This information about nuclei positions is available for artificial images, which are constructed via simulation of the cell population.

The process of artificial image construction is as follows: a random generator chooses the positions of brown and blue objects (immunopositive and immunonegative cells' nuclei) in the image plane according to the estimated probability distributions of their shape and size. These distributions are estimated using a collection of experimentally acquired images. The numbers of both types of objects, the colour tones and the textures of objects and background are taken from the experimentally collected counterpart image as samples and numbers characteristic of the particular image. Spots of clean background are captured into the synthetic image background layer and enlarged to form a continuous layer on which the object layers are located. Synthesis of the object layers is done using an adjusted version of the SIMCEP software and the Camera Raw 4.1 module of Photoshop CS5.

SIMCEP, developed by Lehmussola and co-workers [19, 20], is available via the internet. The software is dedicated to synthesizing full colour fluorescent microscopic images of nuclei or cell cultures. For the needs of this paper it has been adjusted to simulate images from transmission light microscopy. The core of the SIMCEP system (the generator of nuclei according to the distribution of their shape and size, the template generation, the texture constructor, and the microscope and camera signal degradation module) has been used, while problems with the specific background characteristics have been solved in Photoshop.

Five experimentally acquired images of lymphoma tissue samples became the models for five artificial images, constructed as RGB 24-bit colour synthetic microscopic images stored in uncompressed tif files. The artificial image presented in Figure 3 (top-left) was synthesized based on the model image presented in Figure 1 (top-left), using the template of immunopositive cell nucleus positions and sizes presented in Figure 3 (bottom-left). To compare the characteristics of the synthetic image and its counterpart, the full images are presented in Figure 3 (top-left) and Figure 1 (top-left) respectively, while magnified fragments of both images are presented in Figure 3 (top-right) to show details of the object and background characteristics. Table 3 with the results of a statistical comparison is also provided.

Figure 3

Synthesized artificial image. The artificial image (top-left) constructed as counterpart of image shown in Figure 1; enlarged fragment of experimentally collected image compared to the artificial image (top-right); the template of brown objects (bottom-left); the blue objects template with added Perlin texture (bottom-right).

Table 3 Objects characteristics

The number of brown, marked nuclei is adjusted to the particular experimentally collected image, and the templates of all nuclei locations generated using SIMCEP are presented: (1) in Figure 3 (bottom-left), the immunopositive nuclei in the form of a template, and (2) in Figure 3 (bottom-right), the immunonegative nuclei in the form of a map textured with Perlin noise. The colours are separated from the immunopositive and immunonegative nuclei and from the background of the counterpart image after the reduction of noise and chromatic aberration in Camera Raw. All layers (the background layer, the brown objects-of-interest layer and the blue nuclei layer) are put together in Photoshop. Each step of artificial image signal degradation typical of microscope and camera technical limitations, such as noise, vignetting and blurring, is simulated by the SIMCEP software, except for the chromatic aberration, which is added in Camera Raw.

The results of the adaptive threshold method comparison

All chosen adaptive threshold methods are applied to three types of images calculated based on the full colour synthetic image (see Figure 4, top-left image):

  • B channel of RGB colour image in Figure 4 (bottom-left),

  • monochromatic image calculated according to the equation presented earlier, as the brown component extracted from all RGB channels, in Figure 4 (bottom-right),

  • brown part of the image obtained by colour deconvolution with three colours: blue, brown and the rest, called the third component, in Figure 4 (top-right).

Figure 4

Artificial image and three types of images calculated based on full colour image. The artificial image (bottom-left) constructed as described in article and its B-channel of RGB (bottom-right), its “brown” axis (top-right) and its brown map after colour deconvolution (top-left).

The segmentation results for the 5 artificial images (A to E), for all objects in the image (without rejection of objects touching the image border), are presented as the number of found objects, the sensitivity, the specificity and four coefficients of similarity in Tables 4, 5 and 6. Table 4 presents results for monochromatic images constituted as the B-channel, Table 5 presents results for monochromatic images obtained by deconvolution, and Table 6 presents results for monochromatic images constituted as the brown colour extracted from the RGB channels. These tables show that the results for each artificial image are close for each segmentation method applied to a particular image. Generally, the best results are those of segmentation applied to the brown component after colour deconvolution: the mean and standard deviation of the difference between the number of found objects and the number of 'true' objects in the template, over all segmentation methods, is -0.2 ±0.6, compared with 2.3 ±6.9 for the monochromatic image with brown colour extracted from RGB (the 'brown channel') and 6.0 ±13.9 for the blue channel of RGB. The mean sensitivity calculated over all segmentation methods is 0.9264 ±0.0611, 0.8366 ±0.1571 and 0.9432 ±0.0764 respectively, while the mean specificity calculated on these data is 0.9981 ±0.0035, 0.9858 ±0.0264 and 0.9886 ±0.0235. Thus the 5 artificial images are similar to each other, and all objects in all images can be treated as a homogeneous population of tested objects.

Table 4 The results of adaptive threshold comparison computed on the blue channel (B-channel of RGB)
Table 5 The results of adaptive threshold comparison computed on brown colour images after colour deconvolution of RGB image
Table 6 The results of adaptive threshold comparison computed on the “brown channel” calculated from RGB image

The next step of comparison and evaluation concerns the adaptive threshold methods themselves, so it has been done at the level of single objects (not single images). Objects that touch the image border are segmented with holes or cavities, which in most cases causes them to disappear during the size-filtering step; therefore only objects not touching the image border were taken into account in the further evaluation. As the newly designed method will be applied to virtual slides analysed by parallel algorithms operating on image fragments selected with covering margins, the rejection of objects touching the image border will be compensated at the stage when the partial results are merged.
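The rejection of border-touching objects can be sketched with scikit-image's `clear_border` (an illustrative substitute for the authors' implementation):

```python
import numpy as np
from skimage.measure import label
from skimage.segmentation import clear_border

def drop_border_objects(binary):
    """Remove every connected component that touches the image
    border, as done before the per-object evaluation."""
    lbl = label(binary.astype(bool))   # label connected components
    return clear_border(lbl) > 0       # zero out border labels, back to bool
```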

The evaluation of the segmentation results for single objects is presented as B-A plots for such object features as area, roundness, eccentricity and so on. The comparison of object areas in pixels for five of the six segmentation methods, calculated for each of the 3 types of monochromatic images (collecting various information about the brown colour) from the five true-colour artificial images, is presented in Figure 5A-I. The Yasuda method was excluded from presentation because of its performance: it fails to select a certain fraction of objects and at the same time selects a substantial fraction of false positive objects for all types of images (103 for the blue channel, 63 for the brown colour, only 2 for the results of colour deconvolution), so its plots are not presented in the paper. Some of the plots in Figure 5 (A, B, C, D, G and H) consist of about 70 objects not touching the image border from the 5 synthetic images, while the others (E, F and I) are combined plots showing the results of 3 or 4 methods together, distinguishable by colour. In Figure 5 and Figure 6 objects segmented by the Niblack method are presented in red, by the Sauvola method in blue, by the Bernsen method in green, by the White method in black and by the Palumbo method in yellow.
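A Bland-Altman plot of a per-object feature can be sketched as follows (standard B-A convention: difference plotted against the mean of the two measurements, with the bias and ±1.96 SD limits of agreement; the plotting details are our assumptions, not taken from the paper):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

def bland_altman(ax, measured, reference, label=None, color="k"):
    """Scatter difference (measured - reference) against the mean of
    both measurements; draw bias and +/-1.96 SD limits of agreement."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    mean = (measured + reference) / 2.0
    diff = measured - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)        # half-width of agreement band
    ax.scatter(mean, diff, s=12, c=color, label=label)
    ax.axhline(bias, ls="-")
    ax.axhline(bias + loa, ls="--")
    ax.axhline(bias - loa, ls="--")
    return bias, loa
```

Calling it once per method, with a distinct colour, on the same axes reproduces the combined panels (E, F, I) of Figure 5.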

Figure 5

The Bland-Altman plots of the area feature. (A) Bernsen method of segmentation applied to image after colour deconvolution; (B) Sauvola method of segmentation applied to image after colour deconvolution; (C) Sauvola method of segmentation applied to image after colour deconvolution, presentation of true positive objects only; (D) White method of segmentation applied to ‘brown channel’; (E) Niblack, Sauvola, Bernsen and Palumbo methods of segmentation applied to ‘brown channel’; (F) Niblack, White and Palumbo methods of segmentation applied to image after colour deconvolution; (G) Bernsen method of segmentation applied to blue channel of RGB; (H) White method of segmentation applied to blue channel of RGB; (I) Niblack, Sauvola and Palumbo methods of segmentation applied to blue channel of RGB. [colour representation: Niblack - red, Sauvola - blue, Bernsen - green, White - black and Palumbo - yellow].

Figure 6

The Bland-Altman plots of shape features. (A) solidity, Bernsen method of segmentation on image after colour deconvolution; (B) roundness, Bernsen method of segmentation on image after colour deconvolution; (C) perimeter, Bernsen method of segmentation on image after colour deconvolution; (D) solidity, White and Bernsen methods of segmentation applied to blue channel of RGB; (E) roundness, White and Bernsen methods of segmentation applied to blue channel of RGB; (F) perimeter, White and Bernsen methods of segmentation applied to blue channel of RGB; (G) solidity, Sauvola method of segmentation on image after colour deconvolution; (H) roundness, Sauvola method of segmentation on image after colour deconvolution; (I) perimeter, Sauvola method of segmentation on image after colour deconvolution. [colour representation: Sauvola - blue, Bernsen - green and White - black].

It is visible in Figure 5 that the results of almost all methods applied to images after colour deconvolution (A, B, C, F) are better than those applied to the blue channel of RGB (G, H, I) and to the brown component extracted from all channels of RGB (D, E); the latter seems to be the worst. Generally, some B-A plots of the area comparison between template objects and detected objects show systematic under-segmentation of area. The Bernsen method (Figure 5A) and the Niblack, Palumbo and White methods (Figure 5F) applied to images after colour deconvolution, as well as the White method applied to the brown-component monochromatic image (Figure 5D) and to the blue channel of RGB (Figure 5H), show a bias in the segmented object area. This bias is visible as a decrease of the objects’ area in comparison to the corresponding template objects, but all these methods are accurate and precise in the number of objects: for the Bernsen method both accuracy and precision equal 1, while for the modified Sauvola method they equal 1 and 0.9722 respectively. At the same time, the size of objects detected by the Sauvola method applied to the image after colour deconvolution (Figure 5B), the Bernsen method applied to the blue channel of RGB, the Palumbo method also applied to the blue channel, and the Yasuda method applied to all three types of monochromatic images (not presented in the paper) seems unbiased in area detection. However, some of the methods mentioned above detect, to various degrees, extra objects in the background (false positive objects, FP). For the Sauvola method the number of FP objects is minimal (2 out of 72), while for the Yasuda method these numbers are vast, as mentioned above. These results are the reason why the Yasuda method is excluded from further consideration. To find a method which is accurate enough in area detection, a comparison between the area of the segmented object and that of the ‘true’ object from the template is done as B-A plots.
The differences between the area of the segmented object and that of the ‘true’ object from the template for the Sauvola method applied to the result of image deconvolution range between -100 and 1400 pixels for all selected objects (Figure 5B) and within ±80 pixels for true positive objects only (Figure 5C), while for the Bernsen method applied to the blue channel of RGB (Figure 5G) and the Palumbo method applied to the blue channel of RGB (yellow circles in Figure 5F) they range within ±130 pixels and ±170 pixels respectively. So the error in area detection is the lowest if the objects are selected by the Sauvola method, but only if the false positive objects are excluded based on other information.

To reject the extra objects selected by the Sauvola method, two sources of information could be used: (1) a segmentation method that is biased in object size but produces accurate and precise results in the number of detected objects, whose results can be used to mark the true positive objects among the Sauvola method results, or (2) a classifier based on any or all of the shape coefficients described below, used to filter the objects found by the Sauvola method.
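The first option can be sketched as a marker-based filter: eroded objects from the precise-count method serve as markers, and any candidate object containing no marker pixel is dropped as a false positive (function and parameter names below are ours, not the authors'):

```python
import numpy as np
from skimage.measure import label
from skimage.morphology import binary_erosion, disk

def keep_marked_objects(candidates, markers, erode_radius=1):
    """Keep only candidate objects (e.g. Sauvola results) that
    contain at least one marker pixel (e.g. eroded Bernsen results);
    all unmarked candidates are rejected as false positives."""
    # erode markers so only the reliable core of each object remains
    marks = binary_erosion(markers.astype(bool), disk(erode_radius))
    lbl = label(candidates.astype(bool))
    keep = np.unique(lbl[marks & (lbl > 0)])  # labels hit by a marker
    return np.isin(lbl, keep) & (lbl > 0)
```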

To find a segmentation method that gives a precise number of detected objects and at the same time decreases the objects’ size by homogeneous rejection of area around the objects’ periphery, only methods applied to the image after colour deconvolution (Figure 5A,F) or to the blue channel (Figure 5G,H,I) should be taken into consideration. The B-A plots of the area feature for the monochromatic image with the brown colour extracted from RGB (Figure 5E) show rather biased results (from -100 to -350 pixels) because of the presence of cavities and holes in a large fraction of the segmented objects. So the following three methods are taken into consideration: the Bernsen method applied to the results of colour deconvolution (Figure 5A) and to the blue channel of RGB (Figure 5G), and the White method applied to the blue channel (Figure 5H).

The choice among the previously mentioned methods and/or the shape-based object filtration is examined based on B-A plots comparing the shape features perimeter, solidity, roundness and axis ratio, and two features which describe the relative position (co-localization) of segmented and template objects: eccentricity and the quasi B-A plots described further in this section. These quasi B-A plots show the distribution of erroneously detected area (FP) as a function of the distance between the centroids of the selected and template objects. They have been calculated for all methods (6), all types of monochromatic image with various colour information (3) and all features (6), but only those which have an impact on the conclusions are shown in Figure 6 and Figure 7.

Figure 7

The co-localization features: eccentricity (A, C) and the quasi B-A plots defined by the authors (B, D). (A) Bland-Altman plot of eccentricity, Bernsen method of segmentation on image after colour deconvolution; (B) quasi B-A plot (described in section “The methods of comparison of the chosen segmentation methods results”), Bernsen method of segmentation on image after colour deconvolution; (C) Bland-Altman plot of eccentricity, Sauvola method of segmentation on image after colour deconvolution; (D) quasi B-A plot, Sauvola method of segmentation on image after colour deconvolution.

The B-A plots in Figure 6 present shape features (except axis ratio, whose results are similar to the presented features): solidity, which shows whether the increase of the objects’ size needed to reach the convex area is homogeneously distributed (Figure 6A,D,G); roundness, which shows whether the ratio of area to squared perimeter is independent of the objects’ roundness (Figure 6B,E,H); and perimeter length, which shows whether the changes in perimeter length compared to the template objects’ perimeter are independent of perimeter length (Figure 6C,F,I). All these features are presented for the Bernsen method applied to the image after colour deconvolution (Figure 6A,B,C) alongside the combined plots of the Bernsen and White methods applied to the blue channel of the RGB image (Figure 6D,E,F). The plots of the first method present a much more homogeneous distribution than the second group of plots. These three shape-feature plots prove that the error in object area detection (the decrease of object size described above) for the Bernsen method applied to the image after colour deconvolution is homogeneously distributed around the object and does not affect its shape. The B-A plots presented in Figure 6 (G, H, I) also show all the previously described shape coefficients for the Sauvola method applied to the result of image deconvolution. The values of these coefficients for false positive objects appear drastically different from the values for true positive objects. Based on this knowledge it is possible to form criteria (a classifier) for the rejection of false positive objects from the set of results. So the Bernsen and Sauvola methods applied to the result of deconvolution, together with shape coefficients (mainly solidity or perimeter), are the best candidates for the construction of the new hybrid method, but only if the Bernsen method’s true positive results indicate a subset of the Sauvola method results.
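The shape coefficients compared in these plots can be obtained per object from `skimage.measure.regionprops`; a sketch, with roundness taken as 4πA/P² (our reading of the text, not the authors' code):

```python
import numpy as np
from skimage.measure import label, regionprops

def shape_features(binary):
    """Per-object solidity, roundness (4*pi*area/perimeter^2),
    perimeter and eccentricity, as compared in the B-A plots."""
    feats = []
    for r in regionprops(label(binary.astype(bool))):
        roundness = (4.0 * np.pi * r.area / r.perimeter ** 2
                     if r.perimeter > 0 else 0.0)
        feats.append({
            "area": r.area,
            "solidity": r.solidity,          # area / convex area
            "roundness": roundness,
            "perimeter": r.perimeter,
            "eccentricity": r.eccentricity,  # of the same-moments ellipse
        })
    return feats
```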

The B-A plots in Figure 7 present co-localization features: eccentricity (Figure 7A,C) and a new coefficient defined by the authors (Figure 7B,D), which shows whether the distance between the two centroids is correlated with the ratio of the sum of false negative and false positive pixels to the true positive pixels. Eccentricity, defined as the ratio of the distance between the foci of the ellipse to its major axis length, is calculated for the ellipse that has the same second moments as the object. A homogeneous distribution of the error, without any bias in eccentricity, is achieved for both the Sauvola and Bernsen methods. It shows that the erroneously detected area in both cases does not cause significant changes in the ellipse which is an estimate of the object. As this information does not tell us whether errors in the detected area move the centroid position by more than a circle of radius equal to 1 pixel, the new B-A-like plots have been analysed. These plots, presented in Figure 7 (B, D), show that the fraction of objects for which an error in the detection of the peripheral part moves the centroid of the segmented object, in comparison to the corresponding template object, by between 1 and 2.5 pixels is less than 20% (for the Bernsen method 12 objects out of 70, for the Sauvola method 14 objects out of 72). So in most results of the Bernsen and Sauvola methods the error in area detection is homogeneously located on the peripheral part of the object if these methods are applied to the monochromatic image after colour deconvolution. This proves that the Bernsen method results can be used as markers of true positive objects (particularly if they are eroded using a mathematical morphology operation [49, 50]): these markers should indicate the inside of some of the Sauvola method results, and all objects which are not marked are FP objects and can be rejected.
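One point of the quasi B-A plot described above can be computed for a matched pair of masks as the centroid shift plotted against the (FN+FP)/TP pixel ratio; a sketch under that reading of the definition (helper name is ours):

```python
import numpy as np

def quasi_ba_point(segmented, template):
    """One point of the quasi B-A plot: centroid shift between a
    segmented object and its template counterpart versus the
    misclassified-to-true-positive pixel ratio (FN+FP)/TP."""
    seg = segmented.astype(bool)
    ref = template.astype(bool)
    c_seg = np.array(np.nonzero(seg)).mean(axis=1)  # (row, col) centroid
    c_ref = np.array(np.nonzero(ref)).mean(axis=1)
    shift = np.linalg.norm(c_seg - c_ref)
    tp = np.sum(seg & ref)
    fp = np.sum(seg & ~ref)
    fn = np.sum(~seg & ref)
    ratio = (fn + fp) / tp if tp else np.inf
    return shift, ratio
```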

Figure 8 presents, in more detail, the segmentation results calculated for the chosen fragment of the image shown in Figure 3 (top-left), for all types of monochromatic images: in the first row for the B-channel, in the second row for the result of deconvolution and in the bottom row for the results of brown-component extraction. These results are presented as coloured outlines of the detected objects. In the left column of Figure 8 there are results of four methods: (1) the Niblack method in red, (2) the Yasuda method in green, (3) the Palumbo method in gray and (4) the Sauvola method in blue, while in the right column there are only two: (1) the White method in red and (2) the Bernsen method in green. The other colours which appear in the image arise, by the law of primary colour addition, only where outlines overlap: yellow where green is added to red, magenta where blue is added to red, cyan where green is added to blue, and white where all three colours are added. The left part of each image is imposed on the template, while the right part is shown without the template. Both parts show the mutual localization of the detected outlines relative to each other and to the template objects. Visual evaluation of Figure 8 shows that the template covers almost all detected object outlines, because the detected objects are smaller than or just the size of the template objects; therefore the differences between particular methods’ results can be observed in the right part of each image. All white pixels in the left parts of the images and all yellow pixels in the right parts show agreement between the selected outlines, while lines in other colours show the distance between the results. These distances are relatively small for the segmentation performed on the monochromatic image resulting from deconvolution and on the B-channel image (Figure 8A-D).
Only one FP object segmented by the Sauvola method is presented in Figure 8C, while in Figure 8A there are many more FP objects (in green) segmented by the Yasuda method. So all the methods of results comparison strengthen our belief that the process of colour deconvolution produces the monochromatic image with the best representation of the brown colour component.

Figure 8

Image segmentation results. The sub-images present overlapped results of the adaptive threshold methods: in the left column for the Niblack method (in red), the Sauvola method (in blue), the Yasuda method (in green) and the Palumbo method (in gray), and in the right column for the White method (in red) and the Bernsen method (in green). The top row (A and B) presents results calculated on the B-channel of RGB, the middle row (C and D) presents results calculated for the brown map after colour deconvolution, while the bottom row (E and F) presents results for the “brown axis” in RGB. The other colours appearing in the image should be identified, according to the law of primary colour addition, as overlapping outlines. The left part of each sub-image is imposed on the template, which causes the inside of each object to appear white, while the right part shows the mutual localization of the detected lines on a dark gray instead of black background.

Discussion and conclusions

The investigation presented in this paper has two aims: (1) to compare the chosen adaptive threshold methods on immunohistochemically stained lymphoma tissue sections, in order to collect knowledge on how to design a new method based on the local thresholding methodology, and (2) to prove the usefulness of creating artificial images which simulate experimentally acquired microscopic images for the objective validation of image processing methods. The first goal has been achieved, because the results of all tested adaptive threshold methods except the Yasuda method appear to be good or very good (accuracy from 0.9986 to 0.9816 and precision from 1 to 0.6773, for the Bernsen and Palumbo methods applied to the B-channel and for the White method applied to the B-channel and the Palumbo method applied to the result of colour deconvolution, respectively) when accuracy and precision are quantified based on pixel classification. The best accuracy and precision (0.9945 and 1 respectively) are achieved by the White method applied to the B-channel of RGB, but this method decreases the size of the segmented objects and sometimes rejects objects that touch the image edges. The accuracy and precision for the two chosen methods, calculated from area, are 0.9892 and 0.9331 for the Sauvola method and 0.9864 and 0.8454 for the Bernsen method. Calculated from the number of selected objects, however, both accuracy and precision for the Bernsen method equal 1, while for the modified Sauvola method they equal 1 and 0.9722 respectively.

All tested methods produce results based on various criteria, but all use the same size of the sliding window around the classified pixel (in this investigation the window size is 51×51 pixels, because of the object size) and the same value of minimal contrast between object and background (in this investigation Tc = 150):

  • the Bernsen method uses only these two parameters, but it generally produces a varying threshold level across the image plane, adjusting it to the mean of two numbers: the maximum and the minimum intensity in the window; if the local contrast is bigger than Tc the threshold is set to the locally adjusted value, otherwise the background is detected;

  • the hybrid Sauvola method classifies objects according to the description above using two additional parameters: k = -0.2, which introduces a bias into the variance value, and R = 128, which standardizes the variance value; this method also produces a locally adjusted threshold level according to the mean intensity value in the window corrected by the biased and standardized variance; if the local contrast is bigger than Tc the threshold is set to the locally adjusted value, otherwise the background is detected;

  • the White method also classifies objects according to the mean intensity value of pixels inside the window, but classifies a pixel as belonging to the object if the intensity of the analysed pixel multiplied by the bias parameter (in this investigation bias = 2) is bigger than the mean intensity value calculated inside the window; what is essential in this method is that the threshold level is also locally adjusted, but the local threshold value depends on the mean intensity value in the window and on the chosen constant bias.
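The three rules above can be sketched as windowed filters. This is an illustrative reconstruction, not the authors' code: it assumes dark (stained) objects on a bright background, applies the Tc contrast gate as described in the bullets, and uses scipy's sliding-window filters; the paper's defaults (w = 51, Tc = 150, k = -0.2, R = 128, bias = 2) are kept as parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def bernsen(img, w=51, t_c=150):
    """Bernsen: threshold at the local mid-range (max+min)/2; pixels
    in low-contrast windows (max - min <= t_c) become background."""
    f = img.astype(float)
    mx = maximum_filter(f, size=w)
    mn = minimum_filter(f, size=w)
    obj = f < (mx + mn) / 2.0          # dark pixel below mid-range
    obj[(mx - mn) <= t_c] = False      # contrast gate
    return obj

def sauvola_hybrid(img, w=51, k=-0.2, R=128.0, t_c=150):
    """Hybrid Sauvola: local mean corrected by the biased (k),
    standardised (R) local standard deviation, plus the Tc gate."""
    f = img.astype(float)
    m = uniform_filter(f, size=w)
    # local std from E[x^2] - E[x]^2, clamped against round-off
    s = np.sqrt(np.maximum(uniform_filter(f * f, size=w) - m * m, 0.0))
    obj = f < m * (1.0 + k * (s / R - 1.0))
    contrast = maximum_filter(f, size=w) - minimum_filter(f, size=w)
    obj[contrast <= t_c] = False       # contrast gate
    return obj

def white(img, w=51, bias=2.0):
    """White: compare bias-scaled pixel intensity with the local
    mean; for dark objects the pixel is object when bias * I < mean."""
    f = img.astype(float)
    return bias * f < uniform_filter(f, size=w)
```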

The local threshold level in the White method depends on the bias, which scales the intensity of the analysed pixel; this causes the method to perform well in images with high contrast between objects and background. The highest contrast is observed in the blue-channel monochromatic image, despite the fact that the texture present in the blue objects locally disturbs this contrast. The other two methods depend on the mean intensity, corrected by the variance for the Sauvola method and by half of the intensity range inside the window for the Bernsen method, which makes them less dependent on the value of the contrast but dependent on the absence of local contrast disturbance. This is observed in monochromatic images after deconvolution, where the texture of the blue objects is rejected and the texture of the background is really weak. Both methods, applied to the images after colour deconvolution, produce complementary results. This derives from the fact that the mean intensity corrected by the standardized variance is sensitive enough to detect regions of less condensed brown colour: it can detect blurred edges of objects and at the same time detects gentle contrails of stain deposits in the background, while the half of the intensity range cuts all blurred fragments of objects and does not detect stain deposits in the background. This leads to the conclusion that the newly developed method should take advantage of the precision and accuracy of object detection of both the Bernsen and Sauvola methods, working synergistically to reject all errors, e.g. extra objects.

The evaluation of the performance of 6 adaptive threshold methods, on three types of monochromatic images, based on 5 true-colour artificial images, was done. So the second aim, the verification of the thesis about the usefulness of the artificial image synthesis method in the evaluation and comparison of image processing methods, was also achieved. The known and assumed location of the objects of interest in the template allows the use of standard methods of quality assessment, such as specificity, sensitivity, the standard coefficients of similarity, precision and accuracy, and Bland-Altman analysis, which work well in all comparative studies. As the scientific and clinical interest in quantifying brown objects in DAB&H stained samples is evident, the evaluation of segmentation results using artificially synthesized images allows gathering a huge amount of knowledge about image analysis efficiency in the context of image characteristics. This knowledge will be used during the development of the new method in the future.

References

  1. Roux L, Tutac A, Lomenie N, Balensi D, Racoceanu D, all: A cognitive virtual microscopic framework for knowlege-based exploration of large microscopic images in breast cancer histopathology. Engineering in Medicine and Biology Society, 2009. EMBC 2009. Annual International Conference of the IEEE. 2009, Minneapolis, MN: IEEE, 3697-3702.

  2. Grala B, Markiewicz T, Kozlowski W, Osowski S, Slodkowska J, Papierz W: New automated image analysis method for the assessment of Ki-67 labeling index in meningiomas. Folia Histochemica et Cytobiologica. 2010, 47 (4): 587-592. 10.2478/v10042-008-0098-0.

  3. Swerdlow S, for Research on Cancer IA: WHO Classification of Tumours of Haematopoietic and Lymphoid Tissues. 2008, World Health Organization classification of tumours, International Agency for Research on Cancer, [http://www.ncbi.nlm.nih.gov/nlmcatalog/101477951]. [ISBN-13: 9789283224310; ISBN-10: 9283224310]

  4. Rojo M, Bueno G, Slodkowska J: Review of imaging solutions for integrated quantitative immunohistochemistry in the Pathology daily practice. Folia Histochemica et Cytobiologica. 2010, 47 (3): 349-354. [http://czasopisma.viamedica.pl/fhc/article/view/4337]

  5. Seidal T, Balaton AJ, Battifora H: Interpretation and Quantification of Immunostains. Am J Surg Pathol. 2001, 25 (9): 1204-1207. 10.1097/00000478-200109000-00013. [http://journals.lww.com/ajsp/Fulltext/2001/09000/Interpretation_and_Quantification_of_Immunostains.13.aspx]

  6. Yaziji H, Barry T: Diagnostic Immunohistochemistry: What can Go Wrong?. Adv Anat Pathol. 2006, 13 (5): 238-246. 10.1097/01.pap.0000213041.39070.2f. [http://journals.lww.com/anatomicpathology/Fulltext/2006/09000/Diagnostic_Immunohistochemistry__What_can_Go.3.aspx]

  7. Du X, Dua S: Segmentation of fluorescence microscopy cell images using unsupervised mining. Open Med Inform J. 2010, 4: 41-49. [http://europepmc.org/abstract/MED/21116323]

  8. Markiewicz T, Wisniewski P, Osowski S, Patera J, Kozlowski W, Koktysz R: Comparative analysis of methods for accurate recognition of cells through nuclei staining of Ki-67 in neuroblastoma and estrogen/progesterone status staining in breast cancer. Anal Quant Cytol Histol. 2009, 31: 49-62. [http://www.ncbi.nlm.nih.gov/pubmed/19320193]

  9. Lopez C, Lejeune M, Salvado MT, Escriva P, Bosch R, Pons L, Alvaro T, Roig J, Cugat X, Baucells J, Jaen J: Automated quantification of nuclear immunohistochemical markers with different complexity. Histochem Cell Biol. 2008, 129: 379-387. 10.1007/s00418-007-0368-5. [http://dx.doi.org/10.1007/s00418-007-0368-5]

  10. Pietka E, Kawa J, Spinczyk D, Badura P, Wieclawek W, Czajkowska J, Rudzki M: Role of radiologists in CAD life-cycle. Eur J Radiol. 2011, 78 (2): 225-233. 10.1016/j.ejrad.2009.08.015. [http://www.sciencedirect.com/science/article/pii/S0720048X09005087]

  11. Korzynska A, Zychowicz M: A method of estimation of the cell doubling time on basis of the cell culture monitoring data. Biocybern Biomed Eng. 2008, 28 (4): 75-82. [http://ibib.waw.pl/bbe/bbefulltext/BBE_28_4_075_FT.pdf]

  12. Choi HJ, Choi IH, Cho NH, Choi HK: Color image analysis for quantifying renal tumor angiogenesis. Anal Quant Cytol Histol. 2005, 27: 43-51. [http://www.ncbi.nlm.nih.gov/pubmed/15794451]

  13. Comaniciu D, Meer P: Cell image segmentation for diagnostic pathology. Advanced Algorithmic Approaches to Medical Image Segmentation. Advances in Computer Vision and Pattern Recognition. 2002, London: Springer, 541-558. 10.1007/978-0-85729-333-6_10.

  14. Markiewicz T, Osowski S, Patera J, Kozlowski W: Image processing for accurate cell recognition and count on histologic slides. Anal Quant Cytol Histol / Int Acad Cytology [and] Am Soc Cytol. 2006, 28 (5): 281-291. [http://www.ncbi.nlm.nih.gov/pubmed/17067010]

  15. Kayser K, Gortler J, Goldmann T, Vollmer E, Hufnagl P, Kayser G: Image standards in tissue-based diagnosis (Diagnostic Surgical Pathology). Diagn Pathol. 2008, 3: 17-10.1186/1746-1596-3-17. [http://www.diagnosticpathology.org/content/3/1/17]

  16. Gil J, Wu HS: Applications of image analysis to anatomic pathology: realities and promises. Cancer Investig. 2003, 21 (6): 950-959. 10.1081/CNV-120025097.

  17. Gil J, Wu H, Wang BY: Image analysis and morphometry in the diagnosis of breast cancer. Microsc Res Tech. 2002, 59 (2): 109-118. 10.1002/jemt.10182.

  18. Klapper W, Hoster E, Determann O, Oschlies I, Laak J, Berger F, Bernd HW, Cabecadas J, Campo E, Cogliatti S, Hansmann M, Kluin P, Kodet R, Krivolapov Y, Loddenkemper C, Stein H, Muller P, Barth T, Müller-Hermelink K, Rosenwald A, Ott G, Pileri S, Ralfkiaer E, Rymkiewicz G, Krieken J, Wacker H, Unterhalt M, Hiddemann W, Dreyling M: Ki-67 as a prognostic marker in mantle cell lymphoma-consensus guidelines of the pathology panel of the European MCL Network. J. Hematopathol. 2009, 2: 103-111. 10.1007/s12308-009-0036-x.

  19. Lehmussola A, Ruusuvuori P, Selinummi J, Huttunen H, Yli-Harja O: Computational framework for simulating fluorescence microscope images with cell populations. Med Imaging, IEEE Trans. 2007, 26 (7): 1010-1016.

  20. Lehmussola A, Ruusuvuori P, Selinummi J, Rajala T, Yli-Harja O: Synthetic images of high-throughput microscopy for validation of image analysis methods. Proc IEEE. 2008, 96 (8): 1348-1360.

  21. Kayser K, Radziszowski D, Bzdyl P, Sommer R, Kayser G: Towards an automated virtual slide screening: theoretical considerations and practical experiences of automated tissue-based virtual diagnosis to be implemented in the Internet. Diagn Pathol. 2006, 1: 1-8. 10.1186/1746-1596-1-10.

  22. Kayser G, Radziszowski D, Bzdyl P, Sommer R, Kayser K: Theory and implementation of an electronic, automated measurement system for images obtained from immunohistochemically stained slides. Anal Quant Cytol Histol. 2006, 28: 27-38. [http://pubget.com/paper/16566277]

  23. Markiewicz T: Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology. Diagn Pathol. 2011, 6: 1-7. 10.1186/1746-1596-6-S1-S18.

  24. Bueno G, Gonzalez R, Deniz O, Garcia-Rojo M, Gonzalez-Garcia J, Fernandez-Carrobles M, Vallez N, Salido J: A parallel solution for high resolution histological image analysis. Comput Methods Programs Biomed. 2012, 108: 388-401. 10.1016/j.cmpb.2012.03.007. [http://www.sciencedirect.com/science/article/pii/S016926071200082X]

  25. Korzynska A, Iwanowski M: Segmentation of moving cells in bright field and Epi-Fluorescent microscopic image sequences. Computer Vision and Graphics, Volume 6374 of Lecture Notes in Computer Science. Edited by: Bolc L, Tadeusiewicz R, Chmielewski L, Wojciechowski K. 2010, Berlin Heidelberg: Springer, 401-410. 10.1007/978-3-642-15910-7_46.

  26. Korzynska A, Hoppe A, Strojny W, Wertheim D: Investigation of a combined texture and contour method for segmentation of light microscopy cell images. Proceedings of the Second IASTED International Conference on Biomedical Engineering, Volume 417. Edited by: Tilg B. 2004, 234-239. [http://www.actapress.com/Abstract.aspx?paperId=16336]

  27. Korzynska A, Iwanowski M: Multistage morphological segmentation of bright-field and fluorescent microscopy images. Opto-Electron Rev. 2012, 20 (2): 174-186. 10.2478/s11772-012-0026-x.

  28. Bartels P, Montironi R, Duval da Silva V, Hamilton P, Thompson D, Vaught L, Bartels H: Tissue architecture analysis in prostate cancer and its precursors: an innovative approach to computerized histometry. Eur Urol. 1999, 35 (5-6): 484-491. 10.1159/000019884.

  29. Schulerud H, Kristensen GB, Liestol K, Vlatkovic L, Reith A, Albregtsen F, Danielsen HE: A review of caveats in statistical nuclear image analysis. Anal Cell Pathol. 1998, 16 (2): 63-82. [http://iospress.metapress.com/content/6159KHY5B2WPJN0Y]

  30. Koprowski R, Wrobel Z: The cell structures segmentation. Computer Recognition Systems, Volume 30 of Advances in Soft Computing. Edited by: Kurzynski M, Puchala E, Wozniak M, Zolnierek A. 2005, Berlin Heidelberg: Springer, 569-576. 10.1007/3-540-32390-2_67.

  31. Korzynska A, Neuman U, Lopez C, Lejeun M, Bosch R: The method of immunohistochemical images standardization. Image Processing and Communications Challenges 2, Volume 84 of Advances in Intelligent and Soft Computing. Edited by: Choras R. 2010, Berlin Heidelberg: Springer, 213-221. 10.1007/978-3-642-16295-4_24.

  32. Tadrous P: Digital stain separation for histological images. J Microsc. 2010, 240 (2): 164-172. 10.1111/j.1365-2818.2010.03390.x.

  33. Ruifrok A, Johnston D: Quantification of histochemical staining by color deconvolution. Anal Quant Cytol Histol. 2001, 23 (4): 291-299. [http://europepmc.org/abstract/MED/11531144]

  34. Di Cataldo S, Ficarra E, Acquaviva A, Macii E: Automated segmentation of tissue images for computerized IHC analysis. Comput Methods Programs Biomed. 2010, 100: 1-15. 10.1016/j.cmpb.2010.02.002. [http://www.sciencedirect.com/science/article/pii/S0169260710000337]


  35. Di Cataldo S, Ficarra E, Acquaviva A, Macii E: Achieving the way for automated segmentation of nuclei in cancer tissue images through morphology-based approach: A quantitative evaluation. Comput Med Imaging Graph. 2010, 34 (6): 453-461. 10.1016/j.compmedimag.2009.12.008. [http://www.sciencedirect.com/science/article/pii/S089561110900158X]


  36. Kuse M, Sharma T, Gupta S: A classification scheme for lymphocyte segmentation in H&E stained histology images. Recognizing Patterns in Signals, Speech, Images and Videos, Volume 6388 of Lecture Notes in Computer Science. Edited by: Unay D, Cataltepe Z, Aksoy S. 2010, Berlin Heidelberg: Springer, 235-243. 10.1007/978-3-642-17711-8_24.


  37. White JM, Rohrer GD: Image thresholding for optical character recognition and other applications requiring character image extraction. IBM J Res Dev. 1983, 27 (4): 400-411.


  38. Bernsen J: Dynamic thresholding of gray-level images. ICPR’86: International Conference on Pattern Recognition. 1986, [http://libra.msra.cn/Publication/2042781/dynamic-thresholding-of-gray-level-images]


  39. Yasuda Y, Dubois M, Huang T: Data compression for check processing machines. Proc IEEE. 1980, 68 (7): 874-885.


  40. Neuman U, Korzynska A, Lopez C, Lejeune M: Segmentation of stained lymphoma tissue section images. Inform Technol Biomed, Volume 69 of Advances in Intelligent and Soft Computing. Edited by: Pietka E, Kawa J. 2010, Berlin Heidelberg: Springer, 101-113. 10.1007/978-3-642-13105-9_11.


  41. Pham DL, Xu C, Prince JL: A survey of current methods in medical image segmentation. Annu Rev Biomed Eng. 2000, 2: 315-337. 10.1146/annurev.bioeng.2.1.315. [http://www.annualreviews.org/doi/abs/10.1146/annurev.bioeng.2.1.315] [PMID: 11701515]


  42. Fu K, Mui J: A survey on image segmentation. Patt Recognit. 1981, 13: 3-16. 10.1016/0031-3203(81)90028-5. [http://www.sciencedirect.com/science/article/pii/0031320381900285]


  43. Sezgin M, Sankur B: Survey over image thresholding techniques and quantitative performance evaluation. J Electron Imaging. 2004, 13: 146-168. 10.1117/1.1631315.


  44. Niblack W: An Introduction to Image Processing. 1986, Englewood Cliffs: Prentice-Hall International, [http://books.google.pl/books?id=XOxRAAAAMAAJ]


  45. Sauvola J, Pietikainen M: Adaptive document image binarization. Patt Recognit. 2000, 33 (2): 225-236. 10.1016/S0031-3203(99)00055-2. [http://www.sciencedirect.com/science/article/pii/S0031320399000552]


  46. Siva P, Hulls C: Dynamic segmentation of small image windows for visual servoing. Proceedings of the 2005 IEEE International Conference on Mechatronics and Automation, Volume 2. 2005, Niagara Falls, Canada: IEEE Service Center, 643-648. 10.1109/ICMA.2005.1626625.


  47. Palumbo PW, Swaminathan P, Srihari SN: Document image binarization: evaluation of algorithms. Proc SPIE Appl Digit Image Process IX. 1986, 697 (278): 278-285. 10.1117/12.976229.


  48. Alvaro T, Lejeune M, Salvado MT, Lopez C, Jaen J, Bosch R, Pons LE: Immunohistochemical patterns of reactive microenvironment are associated with clinicobiologic behavior in follicular lymphoma patients. J Clin Oncol. 2006, 24 (34): 5350-5357. 10.1200/JCO.2006.06.4766. [http://jco.ascopubs.org/content/24/34/5350.abstract]


  49. Nieniewski M: Morfologia matematyczna w przetwarzaniu obrazow [Mathematical morphology in image processing]. 1998, Problemy Wspolczesnej Nauki: Informatyka, Akademicka Oficyna Wydawnicza PLJ. [http://books.google.pl/books?id=vAk_twAACAAJ]


  50. Iwanowski M: Metody morfologiczne w przetwarzaniu obrazow cyfrowych [Morphological methods in digital image processing]. 2009, Akademicka Oficyna Wydawnicza EXIT. [https://sites.google.com/site/metodymorfologiczne/]



Acknowledgements

No outside funding was received for this study.

Author information

Corresponding author

Correspondence to Lukasz Roszkowiak.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

AK suggested the idea of the investigation (artificial images and the adjusted adaptive threshold methods), wrote the results, discussion and conclusions, and prepared some of the figures. LR wrote parts of the paper, prepared tables and figures, designed and implemented the software for image analysis, image processing and Bland-Altman analysis, performed the image analysis and consulted on the obtained results. LW supported the software implementation. CL, RB and ML performed the experiments, acquired the images and consulted on the obtained results. All authors have read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Korzynska, A., Roszkowiak, L., Lopez, C. et al. Validation of various adaptive threshold methods of segmentation applied to follicular lymphoma digital images stained with 3,3’-Diaminobenzidine&Haematoxylin. Diagn Pathol 8, 48 (2013). https://doi.org/10.1186/1746-1596-8-48

  • DOI: https://doi.org/10.1186/1746-1596-8-48

Keywords