GPU Accelerated Algorithm for Blood Detection in Wireless Capsule Endoscopy Images

Isabel N. Figueiredo (1), Carlos Graça (2) and Gabriel Falcão (2)



(1)
CMUC, Department of Mathematics, Faculty of Science and Technology, University of Coimbra, Coimbra, Portugal

(2)
Instituto de Telecomunicações, Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Coimbra, Coimbra, Portugal

 



 





Abstract

Wireless capsule endoscopy (WCE) has emerged as a powerful tool in the diagnosis of small intestine diseases. One of its main limiting factors is that it produces a huge number of images, whose analysis by a doctor is an extremely time-consuming process. Recently, we proposed a computer-aided diagnosis system for blood detection in WCE images (Figueiredo et al.: An automatic blood detection algorithm for wireless capsule endoscopy images. In: Computational Vision and Medical Image Processing IV: VIPIMAGE 2013, pp. 237–241. Madeira Island, Funchal, Portugal (2013)). While that algorithm is very promising in classifying WCE images, it is still too slow to perform the analysis within a clinically acceptable amount of time; its computations can, however, profit from a parallelized implementation. We identified two crucial steps of the algorithm, segmentation (for discarding non-informative regions in the image that can interfere with blood detection) and the construction of an appropriate blood detector function, as being responsible for most of the global processing time. In this work, a suitable GPU-based (graphics processing unit) framework is proposed for speeding up the segmentation and blood detection execution times. Experiments show that the accelerated procedure is on average 50 times faster than the original one and is capable of processing 72 frames per second.



1 Introduction


Wireless capsule endoscopy (WCE), also called capsule endoscopy (CE), is a noninvasive endoscopic procedure which allows visualization of the small intestine, a region difficult to reach by conventional endoscopies, without sedation or anesthesia. As the name implies, capsule endoscopy makes use of a swallowable capsule that contains a miniature video camera, a light source, batteries, and a radio transmitter (see Fig. 1). The capsule takes images continually during its passage down the small intestine. The images are transmitted to a recorder that is worn on a belt around the patient’s waist. The whole procedure lasts 8 h, after which the data recorder is removed and the images are stored on a computer so that physicians can review them and analyze the potential source of diseases. Capsule endoscopy is useful for detecting small intestine bleeding, polyps, inflammatory bowel disease (Crohn’s disease), ulcers, and tumors. It was invented by Given Imaging in 2000 [12]. Since its approval by the FDA (U.S. Food and Drug Administration) in 2001, it has been widely used in hospitals.

Although capsule endoscopy demonstrates a great advantage over conventional examination procedures, some improvements remain to be made. One major issue with this new technology is that it generates approximately 56,000 images per examination for one patient, whose analysis is very time-consuming. Furthermore, some abnormalities may be missed because of their size or distribution, due to visual fatigue. So, it is of great importance to design a real-time computerized method for the inspection of capsule endoscopic images. Given Imaging Ltd. has also developed the so-called RAPID software for detecting abnormalities in CE images, but its sensitivity and specificity were reported to be only 21.5 and 41.8 %, respectively [10]; see also [19]. Recent years have witnessed some development on automatic inspection of CE images, see [1, 4–6, 7, 9, 14, 15, 18, 20].





Fig. 1
a Image of the capsule. b Interior of the capsule

The main indication for capsule endoscopy is obscure digestive bleeding [5, 9, 14, 18, 20]. In fact, in most of these cases, the source of the bleeding is located in the small bowel. However, these bleeding regions are often not imaged by the capsule endoscopy. This is why blood detection is so important when dealing with capsule endoscopy. The current work is an extension of the paper [8], where an automatic blood detection algorithm for CE images was proposed. Utilizing the Ohta color channel (R+G+B)/3 (where R, G and B denote the red, green and blue channels, respectively, of the input image), we employed analysis of the eigenvalues of the image Hessian matrix, together with a multiscale image analysis approach, to design a function that discriminates between blood and normal frames. The experiments show that the algorithm is very promising in distinguishing between blood and normal frames. However, it is not able to process the huge number of images produced by a WCE examination of a patient within a clinically acceptable amount of time. Its computations can, nevertheless, be parallelized, allowing that huge number of images to be processed far more quickly. We identified two crucial steps of the algorithm, segmentation (for discarding non-informative regions in the image that can interfere with the blood detection) and the construction of an appropriate blood detector function, as being responsible for most of the global processing time. We propose a suitable GPU-based framework for speeding up the segmentation and blood detection execution times, and hence the global processing time. Experiments show that the accelerated procedure is on average 50 times faster than the original one and is capable of processing 72 frames per second.
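To illustrate the eigenvalue analysis of the image Hessian mentioned above, the following sketch computes, at a single Gaussian scale, the two Hessian eigenvalues per pixel of the (R+G+B)/3 channel. This is not the authors' implementation; the function name and the scale value are assumptions, and a multiscale version would simply repeat this over several values of sigma.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(a1, sigma=2.0):
    """Per-pixel eigenvalues of the scale-space Hessian of the A1 channel.

    a1    : 2-D array, the A1 = (R + G + B) / 3 channel.
    sigma : Gaussian scale (illustrative value, an assumption).
    """
    # Second-order Gaussian derivatives give the Hessian entries.
    hxx = gaussian_filter(a1, sigma, order=(0, 2))
    hyy = gaussian_filter(a1, sigma, order=(2, 0))
    hxy = gaussian_filter(a1, sigma, order=(1, 1))
    # Closed-form eigenvalues of the symmetric 2x2 Hessian at each pixel.
    half_trace = (hxx + hyy) / 2.0
    disc = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    return half_trace - disc, half_trace + disc  # lam1 <= lam2
```

A blood detector function would then combine these eigenvalue maps (e.g., their signs and magnitudes) across scales into a single discriminating score.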

This chapter is structured as follows. A choice of the suitable color channel is made in Sect. 2.1 and the segmentation of informative regions is described in Sect. 2.2. A blood detector function is introduced in Sect. 2.3. The outline of the algorithm is given in Sect. 2.4. Validation of the algorithm on our current data set is provided in Sect. 3. The GPU procedure for speeding up the segmentation and blood detection is described in Sect. 4. Finally, the chapter ends with some conclusions in Sect. 5.


2 Blood Detection Algorithm



Notation

Let Ω be an open subset of $\mathbb{R}^2$, representing the image (or pixel) domain. For any scalar, smooth enough, function u defined on $\Omega$, $\|u\|_{L^1(\Omega)}$ and $\|u\|_{L^\infty(\Omega)}$ denote, respectively, the $L^1$ and $L^\infty$ norms of u.


2.1 Color Space Selection


The color of an image carries much more information than the gray levels. In many computer vision applications, the additional information provided by color can aid image analysis. The Ohta color space [17] is a linear transformation of the RGB color space. Its color channels are defined by $A_1=(R+G+B)/3$, $A_2=R-B$, and $A_3=(2G-R-B)/2$. We observe that channel $A_1$ has the tendency to localize the blood regions quite well, as demonstrated in Fig. 3: the first row corresponds to the original WCE images with blood regions and the second row exhibits their respective $A_1$ channel images. Note that, before computing the $A_1$ channel, we applied an automatic illumination correction scheme [22] to the original images, to reduce the effect of uneven illumination.
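For illustration, the three Ohta channels can be computed from an (already illumination-corrected) RGB image as in the following sketch; the function and variable names are assumptions.

```python
import numpy as np

def ohta_channels(rgb):
    """Ohta color channels of an RGB image (H x W x 3 float array)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    a1 = (r + g + b) / 3.0        # intensity-like channel used for blood detection
    a2 = r - b                    # red vs. blue opponent channel
    a3 = (2.0 * g - r - b) / 2.0  # green vs. magenta opponent channel
    return a1, a2, a3
```

Only $A_1$ is used by the blood detector; $A_2$ and $A_3$ are included to complete the transformation.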


2.2 Segmentation


Many WCE images contain uninformative regions such as bubbles, trash, dark regions and so on, which can interfere with the detection of blood. More information on uninformative regions can be found in [1]. We observe that the second component (which we henceforth call the a-channel) of the CIE Lab color space has the tendency to separate these regions from the informative ones. More precisely, for better removal of the uninformative regions, we first decompose the a-channel into geometric and texture parts using the model described in [2, Sect. 2.3], and then perform a two-phase segmentation. The latter relies on a reformulation of the Chan and Vese variational model [2, 3] over the geometric part of the a-channel.

The segmentation method proceeds as follows. We first compute the constants c 1 and c 2 (representing the averages of the image I in a two-region image partition). We then solve the following minimization problem



$$\underset{u,v}{\min} \Big \{ TV_g(u) +\frac{1}{2 \theta} \| u-v\|_{L^2(\Omega)}^2 + \int_\Omega \Big( \lambda\, r( I, c_1, c_2) \, v + \alpha\, \nu (v) \Big) \, dx\, dy \Big \}$$

(1)
where
$TV_g(u) := \int_\Omega g(x,y)\, |\nabla u| \, dx\, dy$
is the total variation norm of the function u, weighted by a positive function g;
$r( I, c_1, c_2) (x,y) := \big (c_1 - I(x,y)\big )^2 - \big (c_2 - I(x,y)\big )^2$
is the fitting term,
$\theta>0$
is a fixed small parameter,
$\lambda>0$
is a constant parameter weighting the fitting term, and
$\alpha\,\nu (v)$
is a term resulting from a reformulation of the model as a convex unconstrained minimization problem (see [2, Theorem 3]). Here, u represents the two-phase segmentation and v is an auxiliary unknown. The segmentation curve, which divides the image into two disjoint parts, is a level set of u,
$\{ (x,y)\in\Omega :u(x,y) = \mu\}$
, where in general
$\mu=0.5$
(but μ can be any number between 0 and 1, without changing the segmentation result, because u is very close to a binary function).

The above minimization problem is solved by minimizing with respect to u and v separately, iterating until convergence. In short, we consider the following two steps:

1. v being fixed, we look for u that solves



$$\underset{u}{\min} \Big \{ TV_g(u) +\frac{1}{2 \theta} \| u-v\|_{L^2(\Omega)}^2 \Big \}.$$

(2)
2. u being fixed, we look for v that solves



$$\underset{v}{\min} \Big \{ \frac{1}{2 \theta} \| u-v\|_{L^2(\Omega)}^2 + \int_\Omega \Big( \lambda\, r( I, c_1, c_2) \, v + \alpha\, \nu (v) \Big) \, dx\, dy \Big \}.$$

(3)
It is shown in [2, Proposition 3] that the solution of (2) is



$$u=v-\theta \mbox{div} p,$$
where div denotes the divergence operator, and
$p=(p_1,p_2)$
solves



$$g\nabla(\theta \mbox{div} p- v)-|\nabla(\theta \mbox{div} p-v)|p=0.$$
The problem for p can be solved using the following fixed-point method



$$p^0=0,~p^{n+1}=\frac{p^n+\delta t \nabla(\mbox{div} p^n-v/\theta)}{1+\frac{\delta t}{g}|\nabla(\mbox{div} p^n-v/\theta)|}.$$
Again from [2, Proposition 4], we have



$$v=\min\{\max\{u-\theta\lambda r( I, c_1, c_2),0\},1\}.$$
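A minimal NumPy sketch of this alternating scheme follows. It is not the authors' GPU implementation: the finite-difference discretization, the time step δt, the iteration counts and the initialization of v are all assumptions, chosen only to make the two updates above concrete.

```python
import numpy as np

def grad(f):
    """Forward-difference gradient (gy, gx) with Neumann boundaries."""
    gy = np.zeros_like(f); gx = np.zeros_like(f)
    gy[:-1, :] = f[1:, :] - f[:-1, :]
    gx[:, :-1] = f[:, 1:] - f[:, :-1]
    return gy, gx

def div(py, px):
    """Backward-difference divergence (discrete adjoint of -grad)."""
    dy = np.zeros_like(py); dx = np.zeros_like(px)
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    return dy + dx

def segment(I, g, c1, c2, lam=1.0, theta=0.1, dt=0.125, outer=50, inner=10):
    """Two-phase segmentation by alternating the u- and v-minimizations."""
    r = (c1 - I) ** 2 - (c2 - I) ** 2            # fitting term r(I, c1, c2)
    v = np.clip(-r, 0.0, 1.0)                    # crude initialization (assumption)
    py = np.zeros_like(I); px = np.zeros_like(I)
    u = v
    for _ in range(outer):
        for _ in range(inner):                   # fixed-point iterations for p
            gy, gx = grad(div(py, px) - v / theta)
            denom = 1.0 + (dt / g) * np.hypot(gy, gx)
            py = (py + dt * gy) / denom
            px = (px + dt * gx) / denom
        u = v - theta * div(py, px)              # u-update: u = v - theta div p
        v = np.clip(u - theta * lam * r, 0.0, 1.0)   # closed-form v-update
    return u >= 0.5                              # level set with mu = 0.5
```

Here g is the positive edge-indicator weight of (1) sampled on the pixel grid, and c1, c2 are the region averages of I; all numeric parameter values above are illustrative.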

The segmentation results for some of the WCE images are shown in Fig. 2. The first row corresponds to the original images, the second row shows the segmentation masks, and the third row displays the segmentation curves superimposed on the original images.





Fig. 2
First row: Original image. Second row: Segmentation mask. Third row: Original image with segmentation curve superimposed

In these experiments (and also in the tests performed in Sect. 3) the values chosen for the parameters involved in the definition of (1) are those used in [2], with g the following edge indicator function
$g(\nabla u)=\frac{1}{1+ \beta \|\nabla u\|^2}$
and
$\beta=10^{-3}.$
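The edge indicator just defined translates directly into code; the choice of finite-difference gradient is an assumption.

```python
import numpy as np

def edge_indicator(image, beta=1e-3):
    """g = 1 / (1 + beta * |grad(image)|^2), with beta = 10^-3 as in the text."""
    gy, gx = np.gradient(image)
    return 1.0 / (1.0 + beta * (gy ** 2 + gx ** 2))
```

The resulting array g is close to 1 in flat regions and small near strong edges, which weakens the total variation penalty there and lets the segmentation curve settle on object boundaries.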
