
CIMST 2013

September 2

Digital Image
Processing and Analysis

Simon F. Nrrelykke
ETH Zurich
Light Microscopy and Screening Center (LMSC)
Data Analysis Unit (DAU)

Electromagnetic Spectrum

Whatever the source wavelength: as long as the result is a digital image, we are involved.

http://en.wikipedia.org/wiki/File:EM_Spectrum_Properties_edit.svg
Outline

Introduction
- History and motivation
- What is a digital image?
- Color space
- What is digital image processing?

Enhancement and Restoration (processing)
- Intensity operations
  - power-law and piecewise-linear transformations
  - histogram equalization and specification
- Spatial operations
  - filtering (smoothing and sharpening)
  - affine transformations
  - binary operations (erosion, dilation)
- Feature extraction
- Frequency-domain operations
  - filtering
  - deconvolution

Segmentation (processing)
- Thresholding
- Edge-based
- Morphological snakes

Putting it all together
- tracking of amoebae (incl. instrument control)

Introduction
History

- CCD (2009 Nobel Prize)
  - invented in 1969 at AT&T Bell Labs
  - first commercial devices available in 1974 (100 x 100 pixels = 0.01 megapixels)
- GFP (2008 Nobel Prize)
  - 1992: the cloning and nucleotide sequence of wtGFP reported in Gene
  - molecular biology hasn't been the same since
- ImageJ (formerly NIH Image) dates from 1997

Motivation

- Make pretty pictures
  - publications, talks, websites, ...
- Automate repetitive tasks
  - contrast adjustment, cropping, counting, ...
- Get numbers out of pictures
  - cell sizes, vessel lengths, GFP expression levels, ...
- Make experiments possible
  - whole-genome screen (3 channels): two million images
- Objectivity
  - or at least: reproducibility, which is receiving a lot of attention at the moment

Objectivity

- All inner squares have the same intensity; they appear darker as the background becomes lighter.
- The Mach band effect: the perceived intensity is not a simple function of the actual intensity.
- Some well-known optical illusions.

Adapted from Fig. 2.7, 2.8, and 2.9 in Digital Image Processing, 3rd Edition

Digital Image

- Image shown as: a) surface plot, b) visual intensity array, c) 2D numerical array.
- Operations are done on each color channel individually, as if it were a gray-scale image.

Adapted from Fig. 2.15, 2.18, and 2.38 in Digital Image Processing, 3rd Edition
Quantization

- Continuous image projected onto a sensor array; the result of spatial sampling and intensity quantization is visible.
- One-dimensional example of the effect of spatial sampling and intensity quantization: continuous image, analog scan, sampled scan, quantized samples.
- Spatial sampling is influenced by the image magnification and the pixel density.
- Intensity quantization is determined by the camera (8-bit, 12-bit, 16-bit, ...), but is influenced by user adjustments of the entire imaging system.

Adapted from Figs. 2.16 and 2.17 in Digital Image Processing, 3rd Edition

Aliasing (Spatial)

- Sample densely enough that the smallest feature of interest can be resolved.
- Be careful when resampling/scaling an image.
- Aliasing refers to an effect that causes different signals to become indistinguishable (aliases of one another) when sampled.
- It also refers to the distortion or artifact that results when the signal reconstructed from samples differs from the original continuous signal.
  http://en.wikipedia.org/wiki/Aliasing
- Figure panels: original, 402 x 566 pixels; blurred (sigma = 1), 201 x 286 pixels; resampled, 201 x 283 pixels; resample of the blurred image, 201 x 286 pixels.
- A graph shows aliasing of an f = 0.9 sine wave by an f = 0.1 sine wave when sampling with period T = 1.0.

Adapted from Fig. 4.17 in Digital Image Processing, 3rd Edition
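The f = 0.9 / f = 0.1 example can be checked numerically. A minimal sketch (an illustration, not from the slides) using NumPy: sampled at integer times, the two sine waves produce identical values up to a sign flip, so they cannot be told apart from their samples.

```python
import numpy as np

n = np.arange(20)                      # sampling times, period T = 1.0
s_hi = np.sin(2 * np.pi * 0.9 * n)     # "fast" f = 0.9 sine wave
s_lo = np.sin(2 * np.pi * 0.1 * n)     # "slow" f = 0.1 alias

# sin(2*pi*0.9*n) = sin(2*pi*n - 2*pi*0.1*n) = -sin(2*pi*0.1*n) at integer n:
print(np.allclose(s_hi, -s_lo))        # True: the samples are indistinguishable
```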


Quantization (Intensity)

(Figure panels: the same image displayed with 256, 128, 64, 32, 16, 8, 4, and 2 gray levels.)

- Number of gray-scale values and their effect.
- 256 = 2^8 values: an 8-bit image.
- False contours become visible from roughly 32 levels.
- Two levels correspond to a thresholded image.

Adapted from Fig. 2.21 in Digital Image Processing, 3rd Edition

Image Dimensions

- We will focus on the spatial 2D case: images viewed as matrices. It is the most common.
- 3D is usually a z-stack of 2D images.
- Movies are 2D time-lapse sequences.

Figure 1.1 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Images, where the number of dimensions is typically one to five, with each dimension corresponding to an independent physical parameter: three (usually denoted x, y, z) to space, one (usually denoted t) to time, and one to wavelength, or color, or more generally to any spectral parameter (denoted s here). In other words, images are discrete functions, I(x, y, z, t, s), with each set of coordinates yielding the value of a unique sample (indicated by the small squares, the number of which is obviously arbitrary here). The overview is not meant to be exhaustive but reflects some of the more frequently used modes of image acquisition in biological and medical imaging. Note that the dimensionality of an image (indicated in the top row) is given by the number of coordinates that are varied during acquisition. To avoid confusion in characterizing an image, it is advisable to add adjectives indicating which dimensions were scanned, rather than mentioning just dimensionality. For example, a 4D image may either be a spatially 2D multispectral time-lapse image, or a spatially 3D time-lapse image.
Color Space

- Most color cameras operate with RGB.
  - RGB is inspired by the human eye.
  - RGB: Cartesian coordinates.
  - HSV (hue, saturation, value): cylindrical coordinates.
- Other color spaces exist:
  - CMYK (cyan, magenta, yellow, black), used by printers
  - L*a*b*
  - sRGB
  - CIE XYZ
  - ...
- Color images can be decomposed into a number of gray-scale images, as sketched below.
- We will proceed in gray-scale.
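A minimal sketch of this decomposition (an illustration, not from the slides), using scikit-image's bundled sample image; rgb2gray and rgb2hsv are standard scikit-image functions:

```python
import numpy as np
from skimage import color, data

rgb = data.astronaut()                            # sample RGB image, (512, 512, 3)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]   # three gray-scale channels

gray = color.rgb2gray(rgb)                        # luminance-weighted gray-scale version
hsv = color.rgb2hsv(rgb)                          # same data in cylindrical HSV coordinates

# A per-channel operation: invert each channel independently, then restack.
inverted = np.stack([255 - r, 255 - g, 255 - b], axis=-1)
```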

Images Are Just Numbers

- So image analysis is just math:
  - linear algebra, statistics, topology, calculus, partial differential equations, functional analysis, ...
  - and art, or at least a sense of beauty.
- We can add, subtract, multiply, and divide images with each other.
- We can perform logical operations on images:
  - e.g. find all pixel values less than 17 and set them to zero; set the rest to 255 (thresholding).
- We can perform local and global mathematical operations on an image:
  - e.g. replace each pixel by the average of itself and its neighbors (smoothing, a linear operation),
  - or replace each pixel by the median of itself and its neighbors (de-noising, a non-linear operation),
  - or replace each pixel by the value of the local spatial derivative (edge detection).
- We can transform images to another space, operate there, then transform back:
  - e.g. Fourier transform two images to k-space, multiply, and inverse transform (this gives the correlation/convolution). It can speed up processing by orders of magnitude; see the convolution theorem, http://en.wikipedia.org/wiki/Convolution_theorem. A sketch of these operations follows below.
  - Think of this as flying: you enter the airplane (transform), travel at ~1000 km/h (operate), then exit the airplane at/near the destination (inverse transform). It could have been done on foot or by car, but if the distance is great, then flying is faster.
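A minimal sketch of the operations listed above (an illustration with assumed random test images, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(256, 256)).astype(float)
img_b = rng.integers(0, 256, size=(256, 256)).astype(float)

diff = img_a - img_b                          # image arithmetic
binary = np.where(img_a < 17, 0, 255)         # logical operation: thresholding

# Convolution via the Fourier domain (convolution theorem); note this is the
# circular convolution -- pad the images to avoid wrap-around in practice.
conv = np.real(np.fft.ifft2(np.fft.fft2(img_a) * np.fft.fft2(img_b)))
```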

Processing vs. Analysis

Figure 1.2 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Illustration of the meaning of commonly used terms. The process of digital image formation in microscopy is described in other books. Image processing takes an image as input and produces a modified version of it (in the case shown, the object contours are enhanced using an operation known as edge detection, described in more detail elsewhere in this booklet). Image analysis concerns the extraction of object features from an image. In some sense, computer graphics is the inverse of image analysis: it produces an image from given primitives, which could be numbers (the case shown), or parameterized shapes, or mathematical functions. Computer vision aims at producing a high-level interpretation of what is contained in an image. This is also known as image understanding. Finally, the aim of visualization is to transform higher-dimensional image data into a more primitive representation to facilitate exploring the data.

From the same primer: "... surveillance, forensics, military defense, vehicle guidance, document processing, weather prediction, quality inspection in automated manufacturing processes, et cetera. Given this enormous success, one might think that computers will soon be ready to take over most human vision tasks, also in biological ..."

Enhance and Restore

Intensity Operations

- Enhance contrast by proper acquisition (junk in => junk out):
  - Use the entire dynamic range of the camera.
  - Inspect the histogram of test images.
  - Adjust the microscope (filters, iris, illumination strength, shutter time, ...).
  - Adjust the camera (exposure time).
- Enhance contrast by specification of a look-up table (LUT), after acquisition is done:
  - typically a piecewise-linear LUT is specified;
  - the choice of LUT is based on the histogram of pixel values:
    - histogram equalization, if a flat output histogram is wanted;
    - histogram specification, for an output histogram of any shape.

(Figure panels: histograms of dark, light, low-contrast, and high-contrast images.)

Adapted from Digital Image Processing, 3rd Edition
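A piecewise-linear LUT of the kind mentioned above can be written in a few lines. A minimal sketch (the function name and the percentile choice are illustrative assumptions, not from the slides):

```python
import numpy as np

def contrast_stretch(img, in_low, in_high, out_low=0, out_high=255):
    """Piecewise-linear LUT: map [in_low, in_high] to [out_low, out_high], clipping outside."""
    img = img.astype(float)
    out = (img - in_low) * (out_high - out_low) / (in_high - in_low) + out_low
    return np.clip(out, out_low, out_high).astype(np.uint8)

# Typical use: stretch the 2nd-98th percentile range to the full 8-bit range.
# lo, hi = np.percentile(img, (2, 98)); enhanced = contrast_stretch(img, lo, hi)
```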

Intensity Transformation

(Figure rows: originals, LUTs, results.)

Figure 2.1 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Examples of intensity transformations based on a global mapping function: contrast stretching, intensity inversion, intensity thresholding, and histogram equalization. The top row shows the images used as input for each of the transformations. The second row shows for each image the mapping function used (denoted M), with the histogram of the input image shown in the background. The bottom row shows for each input image the corresponding output image resulting from applying the mapping function: O(x, y) = M(I(x, y)). It is clear from the mapping ...
Histogram Equalization

From probability theory we know a change-of-variables formula, used for finding one probability density function from another:

p_s(s) = p_r(r) |dr/ds|

where r is the input image, with p_r(r) = n_r / N, and s is the output image, with p_s(s) = n_s / N.

This is what we use in histogram equalization (specification is similar):

s_k = (L - 1)/N * sum_{j=0}^{k} n_j

Adapted from Fig. 3.19 in Digital Image Processing, 3rd Edition
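The formula translates directly into a cumulative-sum LUT. A minimal sketch for an 8-bit image (the function name is an illustrative assumption, not from the slides):

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Histogram equalization of a uint8 image: s_k = (L-1)/N * sum_{j<=k} n_j."""
    hist = np.bincount(img.ravel(), minlength=levels)   # counts n_j
    cdf = np.cumsum(hist)                               # running sum of n_j
    lut = (levels - 1) * cdf / cdf[-1]                  # (L-1)/N * cumulative counts
    return lut[img].astype(np.uint8)                    # apply as a look-up table
```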

Local Intensity Operations

(Figure panels: original; global histogram equalization; local histogram equalization in a 3 x 3 neighborhood.)

- Left: features almost completely hidden inside the squares.
- Middle: global histogram equalization reveals noise and a hint of the shapes.
- Right: local histogram equalization reveals the shapes inside the squares.

Adapted from Fig. 3.26 in Digital Image Processing, 3rd Edition


Convolution

- Convolution is achieved by replacing each pixel by a weighted average of the local neighborhood.
- Depending on the weights, the resulting image will be smoothed, sharpened, show edges, ...
- In real life the convolution is done in the Fourier domain to speed up the process (often dramatically).

Figure 2.2 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Principles and examples of convolution filtering. The value of an output pixel is computed as a linear combination (weighing and summation) of the value of the corresponding input pixel and of its neighbors. The weight factor assigned to each input pixel is given by the convolution kernel (denoted K). In principle, kernels can be of any size. Examples of commonly used kernels of size 3 x 3 pixels include the averaging filter, the sharpening filter, and the Sobel x- or y-derivative filters. The Gaussian filter is often used as a smoothing filter. It has a free parameter (standard deviation σ) which determines the size of the kernel (usually cut off at m = 3σ) and therefore the degree of smoothing. The derivatives of this kernel are often used to compute image derivatives at different scales, as for example in Canny edge detection. The scale parameter, σ, should be chosen such that the resulting kernel matches the structures to be filtered.
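A minimal sketch of these kernels in SciPy (the random test image is an assumption; gaussian_filter, uniform_filter, sobel, and convolve are standard scipy.ndimage functions):

```python
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((128, 128))   # stand-in gray-scale image

smoothed = ndimage.gaussian_filter(img, sigma=2)    # Gaussian smoothing
averaged = ndimage.uniform_filter(img, size=3)      # 3 x 3 averaging kernel
sobel_x = ndimage.sobel(img, axis=1)                # Sobel x-derivative

# An explicit 3 x 3 sharpening kernel, applied by direct convolution:
sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)
sharpened = ndimage.convolve(img, sharpen)
```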

Binary Image Operations

- After thresholding, operate on the binary image, X, by dilation, erosion, etc.
- Done by centering a structuring element, S, on each pixel and performing a logical query.
- Opening = erosion, then dilation.
- Closing = dilation, then erosion.
- Many of these operations can also be done on gray-scale images.

Figure 2.3 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Principles and examples of binary morphological filtering. An object in the image is described as the set (denoted X) of all coordinates of pixels belonging to that object. Morphological filters process this set using a second set, known as the structuring element (denoted S). Here the discussion is limited to structuring elements that are symmetrical with respect to their center element, s = (0, 0), indicated by the dot. In that case, the dilation of X is defined as the set of all coordinates x for which the cross section of S placed at x (denoted S_x) with X is not empty, and the erosion of X as the set of all x for which S_x is a subset of X. A dilation followed by an erosion (or vice versa) is called a closing (versus opening). These operations are named after the effects they ...
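A minimal sketch of these operations with scipy.ndimage (the toy binary image is an assumption, not from the slides):

```python
import numpy as np
from scipy import ndimage

X = np.random.default_rng(0).random((64, 64)) > 0.7   # toy binary image
S = np.ones((3, 3), dtype=bool)                       # structuring element

dilated = ndimage.binary_dilation(X, structure=S)
eroded = ndimage.binary_erosion(X, structure=S)
opened = ndimage.binary_opening(X, structure=S)       # erosion, then dilation
closed = ndimage.binary_closing(X, structure=S)       # dilation, then erosion
```
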
Restoration

- Background subtraction, e.g. to remove vignetting; there are many ways to do this.
- Noise reduction.
- Deconvolution, done in the Fourier domain.

Figure 2.5 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Examples of the effects of image restoration operations: background subtraction, noise reduction, and deconvolution. Intensity gradients may be removed by subtracting a background image. In some cases, this image may be obtained from the raw image itself by mathematically fitting a polynomial surface function through the intensities at selected points (indicated by the squares) corresponding to the background. Several filtering methods exist to reduce noise. Gaussian filtering blurs not only noise but all image structures. Median filtering is somewhat better at retaining object edges but has the tendency to eliminate very small objects (compare the circles in each image). Needless to say, the magnitude of these effects depends on the filter size. Non-linear diffusion filtering was designed specifically to preserve object edges while reducing noise. Finally, deconvolution methods aim to undo the blurring effects of the microscope optics and to restore small details. More sophisticated methods are also capable of reducing noise.
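A minimal sketch of two of these operations (an illustration; the wide-Gaussian background estimate stands in for the polynomial surface fit described above):

```python
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((128, 128))       # stand-in raw image

denoised_gauss = ndimage.gaussian_filter(img, sigma=1)  # blurs noise and structures alike
denoised_median = ndimage.median_filter(img, size=3)    # better at retaining edges

# Crude background subtraction: estimate a smooth background with a very wide
# Gaussian and subtract it from the raw image.
background = ndimage.gaussian_filter(img, sigma=30)
flattened = img - background
```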

Geometrical Transformation

- Coordinate transformations, T, are:
  - rigid (translation and rotation),
  - affine (rigid + scaling and skewing),
  - curved (affine + nonlinear deformation), obtained by polynomial functions.
- Image resampling involves computation of output image, O, pixel values by interpolation of input image, I, pixel values, via the inverse transform T^-1 from pixel positions in O.
- Basically, for each pixel in the output: ask where that position came from (in the original) and what the value was there (found by interpolation).

Figure 2.4 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Geometrical image transformation by coordinate transformation and image resampling. The former is concerned with how input pixel positions are mapped to output pixel positions. Many types of transformations (denoted T) exist. The most frequently used types are (in increasing order of complexity) rigid transformations (translations and rotations), affine transformations (rigid transformations plus scalings and skewings), and curved transformations (affine transformations plus certain nonlinear or elastic deformations). These are defined (or can be approximated) by polynomial functions (with degree n depending on the complexity of the transformation). Image resampling concerns the computation of the pixel values of the output image (denoted O) from the pixel values of the input image (denoted I). This is done by using the inverse transformation (denoted T^-1) to map output grid positions (x', y') to input positions (x, y). The value at this point is then computed by interpolation from the values at neighboring grid positions, using a weighing function, also known as the interpolation kernel (denoted K).
Affine Transformations

Affine transformations are simple matrix multiplications:

[x y 1] = [v w 1] T

where (v, w) are the input coordinates, (x, y) the output coordinates, and T a 3 x 3 transformation matrix.

Adapted from Table 2.2 in Digital Image Processing, 3rd Edition
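A minimal sketch of the inverse-mapping resampling described above, using scipy.ndimage (the rotation angle and test image are assumptions). Note that affine_transform maps output coordinates back to input coordinates, so it expects the inverse transform:

```python
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((100, 100))   # stand-in image

theta = np.deg2rad(30)
# Inverse of a rotation matrix (rotation about the array origin): for each
# output pixel, affine_transform looks up the corresponding input position
# and interpolates there (order=1: bilinear interpolation).
T_inv = np.array([[ np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
rotated = ndimage.affine_transform(img, T_inv, order=1)
```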

Frequency Domain
Fourier Transformations

- Why do it? It makes things easier or faster.
- What is it and how is it done?
  - Take a course, or
  - study the wiki page for a month to get a solid grasp, or
  - accept that it is mathematically sound although it looks like magic/nonsense.

1-dim FT: F(k) = ∫ f(x) e^(-2πikx) dx

1-dim inverse FT: f(x) = ∫ F(k) e^(+2πikx) dk

d-dim FT: F(k) = ∫ f(x) e^(-2πik·x) d^d x

http://en.wikipedia.org/wiki/Fourier_transform
Frequency Filtering

- Example of image restoration by filtering in frequency space (the Fourier domain); figure panels (a)-(d), with a Butterworth band-reject filter used in (c).
- In this case, operating in frequency space was both faster and better than filtering in real space.

Adapted from Fig. 2.40 in Digital Image Processing, 3rd Edition
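A minimal sketch of a Butterworth band-reject filter in the Fourier domain (the radius D0, width W, order n, and the test image are illustrative assumptions; the transfer function follows the standard textbook form):

```python
import numpy as np

img = np.random.default_rng(0).random((256, 256))   # stand-in image
F = np.fft.fftshift(np.fft.fft2(img))               # spectrum, DC at the center

rows, cols = img.shape
u = np.arange(rows) - rows // 2
v = np.arange(cols) - cols // 2
D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)      # distance from the center

D0, W, n = 40.0, 10.0, 2                            # reject radius, width, order
H = 1.0 / (1.0 + ((D * W) / (D ** 2 - D0 ** 2 + 1e-9)) ** (2 * n))

filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
```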

Deconvolution

- Any image is the convolution of the point-spread function (PSF) of the entire imaging system with the object, plus noise with camera-dependent statistics.
- Convolution is a mathematically invertible operation; the inverse operation is deconvolution.
- The addition of noise makes the problem much harder.
- Example of deconvolution (right) of a macrophage.

http://www.svi.nl/HuygensDeconvolution
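A minimal sketch of regularized (Wiener-style) deconvolution, one standard way of coping with the noise; the Gaussian PSF, the noise level, and the regularization constant K are illustrative assumptions (packages such as Huygens use far more sophisticated methods):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
obj = rng.random((128, 128))                        # stand-in "object"

psf = np.zeros_like(obj)
psf[64, 64] = 1.0
psf = ndimage.gaussian_filter(psf, 2.0)             # Gaussian PSF, sigma = 2

blurred = ndimage.gaussian_filter(obj, 2.0)         # imaging: convolution with PSF
noisy = blurred + 0.01 * rng.standard_normal(obj.shape)

# Wiener-style inversion: divide by the transfer function H, regularized by K.
H = np.fft.fft2(np.fft.ifftshift(psf))              # PSF shifted to the origin
K = 0.01                                            # noise-dependent regularization
restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) * np.conj(H) / (np.abs(H) ** 2 + K)))
```
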
Segmentation

- Segmentation is the division of an image into discrete regions.
- It can be achieved by an ever-expanding list of methods:
  - thresholding
  - edge-based
  - active contours
  - watershed
  - k-means clustering
  - ...

Thresholding

(Figure panels: original, 8-bit gray-scale; thresholded at 104 (Otsu); segmented.)

- Embryos: Fiji example image.
- Segmentation here consisted of keeping (connected) black objects larger than 50 pixels².
- Image and operations from Fiji/ImageJ.
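A minimal sketch of the same pipeline in scikit-image (not the Fiji implementation; the random test image is an assumption):

```python
import numpy as np
from skimage import filters, measure, morphology

img = np.random.default_rng(0).random((128, 128))   # stand-in gray-scale image

t = filters.threshold_otsu(img)                     # Otsu's threshold
binary = img < t                                    # keep the dark objects

# Keep only connected objects larger than 50 pixels:
labels = measure.label(binary, connectivity=2)
cleaned = morphology.remove_small_objects(labels, min_size=50) > 0
```
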
Edge Detection

- Edges are found by calculating the derivatives of the image:
  - first order: detects slopes;
  - second order: detects changes in slopes.
- Differentiation enhances noise, so smooth the image first.
- Sobel: 1st order.
- Canny: 1st order, smooth & detect.

(Figure: Sobel result.)

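A minimal sketch of both detectors in scikit-image (the test image is an assumption; sobel returns the gradient magnitude, canny smooths with a Gaussian before detecting):

```python
import numpy as np
from skimage import feature, filters

img = np.random.default_rng(0).random((128, 128))   # stand-in gray-scale image

edges_sobel = filters.sobel(img)                    # 1st-order gradient magnitude
edges_canny = feature.canny(img, sigma=2)           # smooth (sigma), then detect
```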

Edge Detection

(Figure panels: original; edges from original; thresholded edges (Otsu); segmented binary.)

- Segmentation, as before, consisted of keeping (connected) black objects larger than 50 pixels².
- Image and operations from Fiji/ImageJ.
Active Contours

- Minimize an energy associated with the current contour, written as a sum of an internal and an external energy:
  - The external energy is supposed to be minimal when the snake is at the object boundary position. The most straightforward approach consists in giving low values where the regularized gradient around the contour position reaches its peak value.
  - The internal energy is supposed to be minimal when the snake has a shape that is considered relevant given the shape of the sought object.

Demo: http://demo.ipol.im/demo/abmh_real_time_morphological_snakes_algorithm/
(parameters: sigma = 20; dark center lines; contraction balloon force)

Alvarez et al. "A Real Time Morphological Snakes Algorithm". Image Processing On Line (2012) pp. 1-6
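scikit-image ships a related implementation of morphological snakes; a minimal sketch (this is the scikit-image API, not the IPOL demo code; the test image, the rectangular initialization, and all parameter values are illustrative assumptions):

```python
import numpy as np
from skimage import segmentation

img = np.random.default_rng(0).random((128, 128))    # stand-in gray-scale image

# Edge-stopping function: small near strong (regularized) image gradients.
gimg = segmentation.inverse_gaussian_gradient(img, alpha=100.0, sigma=2.0)

init = np.zeros(img.shape, dtype=np.int8)
init[10:-10, 10:-10] = 1                              # initial contour: a rectangle

# balloon=-1: contraction balloon force, as in the demo parameters above.
# (num_iter is named iterations in scikit-image versions before 0.19.)
snake = segmentation.morphological_geodesic_active_contour(
    gimg, num_iter=100, init_level_set=init, smoothing=1, balloon=-1)
```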

Chan-Vese Segmentation

- The "Active Contours Without Edges" method by Chan and Vese.
- Optimally fits a two-phase piecewise constant model to the given image.
- Demo: http://demo.ipol.im/demo/g_chan_vese_segmentation/

Excerpt from Getreuer, "Chan-Vese Segmentation", Image Processing On Line (2012), with the equations reconstructed:

In the implementation, $\epsilon = 1$. For fixed $c_1$ and $c_2$, gradient descent with respect to $\varphi$ is

$$\frac{\partial \varphi}{\partial t} = \delta_\epsilon(\varphi)\left[\mu \, \operatorname{div}\!\left(\frac{\nabla \varphi}{|\nabla \varphi|}\right) - \nu - \lambda_1 (f - c_1)^2 + \lambda_2 (f - c_2)^2\right] \ \text{in } \Omega, \qquad \frac{\delta_\epsilon(\varphi)}{|\nabla \varphi|}\,\frac{\partial \varphi}{\partial \vec{n}} = 0 \ \text{on } \partial\Omega, \tag{14}$$

where $\vec{n}$ is the outward normal on the image boundary.

(Plot: the regularized delta $\delta_\epsilon(t)$ with $\epsilon = 1$.)

For the initialization, an effective choice is the function

$$\varphi(x, y) = \sin\!\left(\tfrac{\pi}{5}\,x\right)\sin\!\left(\tfrac{\pi}{5}\,y\right), \tag{15}$$

which defines the initial segmentation as a checkerboard shape. This initialization has been observed to have fast convergence. When using this initialization with $\nu = 0$, the meaning of the results inside vs. outside is arbitrary.

Alternatively, one may specify a contour C for the initialization. A level set representation of C can be constructed as

$$\varphi(x) = \begin{cases} +1 & \text{where } x \text{ is inside } C, \\ -1 & \text{where } x \text{ is outside } C. \end{cases} \tag{16}$$

Numerical implementation: Getreuer describes the semi-implicit gradient descent for solving the Chan-Vese minimization as developed in the original work. This is not the only way to solve the problem; for example, He and Osher introduced an algorithm based on the topological derivative, Badshah and Chen developed a multigrid method, and Zehiry, Xu, and Sahoo as well as Bae and Tai proposed fast algorithms based on graph cuts.

For the numerical implementation, suppose that f is sampled on a regular grid $\{0, \ldots, M\} \times \{0, \ldots, M\}$. The evolution of $\varphi$ is discretized in space according to

$$\frac{\partial \varphi_{i,j}}{\partial t} = \delta_\epsilon(\varphi_{i,j})\left[\mu\left(\nabla_x^- \frac{\nabla_x^+ \varphi_{i,j}}{\sqrt{\eta^2 + (\nabla_x^+ \varphi_{i,j})^2 + (\nabla_y^0 \varphi_{i,j})^2}} + \nabla_y^- \frac{\nabla_y^+ \varphi_{i,j}}{\sqrt{\eta^2 + (\nabla_x^0 \varphi_{i,j})^2 + (\nabla_y^+ \varphi_{i,j})^2}}\right) - \nu - \lambda_1 (f_{i,j} - c_1)^2 + \lambda_2 (f_{i,j} - c_2)^2\right], \quad i, j = 1, \ldots, M - 1, \tag{17}$$

where $\nabla_x^+$ denotes forward difference in the x dimension, $\nabla_x^-$ denotes backward difference, and $\nabla_x^0 := (\nabla_x^+ + \nabla_x^-)/2$ is central difference, and similarly in the y dimension. The curvature is regularized by the parameter $\eta$.

Getreuer. Chan-Vese Segmentation. Image Processing On Line (2012)
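scikit-image provides an implementation of this method; a minimal sketch (scikit-image API, not the IPOL reference code; the test image and parameter values are illustrative assumptions):

```python
import numpy as np
from skimage import segmentation

img = np.random.default_rng(0).random((128, 128))   # stand-in gray-scale image

# Two-phase piecewise-constant fit with the checkerboard initialization of
# eq. (15). (max_num_iter is named max_iter in scikit-image before 0.19.)
mask = segmentation.chan_vese(img, mu=0.25, lambda1=1.0, lambda2=1.0,
                              tol=1e-3, max_num_iter=200,
                              init_level_set="checkerboard")
```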


Neuron Tracing

- NeuronJ: a plugin for ImageJ.

Figure 3.2 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Tracing of neurite outgrowth using interactive segmentation methods. To reduce background intensity gradients (shading effects) or discontinuities (due to the stitching of scans with different background levels), the image features exploited here are the second-order derivatives, obtained by convolution with the second-order Gaussian derivative kernels (Figure 2.2) at a proper scale (to suppress noise). These constitute a so-called Hessian matrix at every pixel in the image. Its eigenvalues and eigenvectors are used to construct an ellipse (as indicated), whose size is representative of the local neurite contrast and whose orientation corresponds to the local neurite orientation. In turn, these properties are used to compute a cost image (with dark values indicating a lower cost and bright values a higher cost) and vector field (not shown), which together guide a search algorithm that finds the paths of minimum cumulative cost between a start point and all other points in the image. By using graphics routines, the path to the current cursor position (indicated by the cross) is shown at interactive speed while the user selects the optimal path based on visual judgment. Once tracing is finished, neurite lengths and statistics can be computed automatically. This is the underlying principle of the NeuronJ tracing tool, freely available as a plugin to the ImageJ program (discussed in the final chapter). The FilamentTracer tool, commercially available as part of the Imaris software package, uses similar principles for tracing in 3D images, based on volume visualization.

From the same primer: "... active segmentation methods. An example is live-wire segmentation, which was originally designed to perform computer-supported delineation of object edges. It is based on a search algorithm that finds a path from a single user-selected pixel to all other pixels in the image by minimizing the cumulative ..."

Feature Extraction

- After segmentation, measure features:
  - area, circumference, ellipticity, ...
  - orientation, pose,
  - intensity (mean, std, max, min),
  - Haralick features (texture).
- Use the features to answer a specific question, or input them to a machine-learning algorithm to sort objects into different categories (a sketch follows below).
- This is the analysis part (it returns numbers):
  - it allows a compact summary of relevant aspects of the images analyzed;
  - quantified features can be analyzed statistically.
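A minimal sketch of per-object feature measurement in scikit-image (the mask and intensity image are assumptions; regionprops is the standard scikit-image feature API; Haralick texture features would need an additional package such as mahotas):

```python
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
mask = rng.random((128, 128)) > 0.7        # stand-in segmentation mask
intensities = rng.random((128, 128))       # stand-in intensity image

labels = measure.label(mask)
for region in measure.regionprops(labels, intensity_image=intensities):
    features = (region.area, region.perimeter, region.eccentricity,
                region.orientation, region.mean_intensity)
```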

www.Ilastik.org

What is ilastik?
- Interactive Learning and Segmentation Toolkit
- first released in 2011 by scientists at the University of Heidelberg
- a user-friendly tool for image classification and segmentation in up to three spatial and one spectral dimension
- allows the user to annotate an arbitrary number of classes in images with a mouse interface
- features include color, edge, and texture descriptors
- advanced users can add their own problem-specific features
- trains a random-forest classifier
- bases its predictions on local (not global) properties
- used in CellProfiler (image-based screening)

www.Ilastik.org
Initial labels

nucleus
cytoplasm
background
weird stuff

www.Ilastik.org
Interactive prediction

nucleus
cytoplasm
background
weird stuff


www.Ilastik.org
Refinement of labels

nucleus
cytoplasm
background
weird stuff

www.Ilastik.org
Updated result

nucleus
cytoplasm
background
weird stuff


Putting It All Together

Tracking

- Tracking of objects
  - particles (point objects)
  - cells (extended, deformable objects)
- has two separate steps:
  - detection of objects in each frame
  - linking of objects between frames (a minimal linking sketch follows below)
- Detection can use:
  - external knowledge (the point-spread function)
  - constraints (a mechanistic model for shape)
  - information from the previous frame
- Linking can use:
  - only the past, to predict the future (on-the-fly processing)
  - all pre-recorded data (off-line processing)
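A minimal sketch of the linking step (the detections are hypothetical centroids; linear_sum_assignment is SciPy's Hungarian-algorithm solver; real trackers add gating, gap closing, and motion models):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical detections: object centroids in two consecutive frames.
frame_a = np.array([[10.0, 12.0], [40.0, 42.0], [80.0, 15.0]])
frame_b = np.array([[11.0, 13.0], [41.0, 40.0], [79.0, 17.0], [5.0, 90.0]])

# Cost matrix of pairwise distances; optimal one-to-one linking minimizes the
# total distance (unmatched detections in frame_b start new trajectories).
cost = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)
links = list(zip(rows, cols))   # [(0, 0), (1, 1), (2, 2)]
```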

Figure 3.3 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Challenges in particle and cell tracking. Regarding particle tracking, currently one of the best approaches to detection of fluorescent tags is by least-squares fitting of a model of the intensity distribution to the image data. Because the tags are subresolution particles, they appear as diffraction-limited spots in the images and therefore can be modeled well by a mixture of Gaussian functions, each with its own amplitude scaling factor, standard deviation, and center position. Usually the detection is done separately for each time step, resulting in a list of potential particle positions and corresponding features, to be linked between time steps. The linking is hampered by the fact that the number of detected particles may be different for each time step. In cell tracking, a contour model (surface model in the case of 3D time-lapse experiments) is often used for segmentation. Commonly used models consist of control points, which are interpolated using smooth basis functions (typically B-splines) to form continuous, closed curves. The model must be flexible enough to handle geometrical as well as topological shape changes (cell division). The fitting is done by (constrained) movement of the control points to minimize some predefined energy functional computed from image-dependent information (intensity distributions inside and outside the curve) as well as image-independent information (a priori knowledge about cell shape and dynamics). Finally, trajectories can be visualized by representing them as tubes (segments) and spheres (time points) and using surface rendering.

Same Cell without/with Food

- 300x real time (1 s = 5 min)
- 163 µm x 122 µm; 100 min total
- not looping

Movie: Simon F. Nørrelykke

Further Reading

Digital Image Processing, 3rd edition
- by Gonzalez and Woods (2008)
- ~950 pages, ~66 EUR (amazon.de)
- Website for the book: http://www.imageprocessingplace.com/DIP-3E/dip3e_main_page.htm
  - all images and figures from the book
  - solutions to selected problem sets
- Assessment: thorough; not bio-image specific; does not assume any/much math knowledge.

Biological Image Analysis Primer
- by Erik Meijering and Gert van Cappellen (2006)
- Biomedical Imaging Group and Applied Optical Imaging Center, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- 37 pages, free PDF: http://www.imagescience.org/meijering/publications/download/biap2006.pdf
Web Resources

- Digital Image Processing (the book): http://www.imageprocessingplace.com
  - image databases
  - solutions to problem sets from the book
- Interactive tools to learn about image processing: http://learn.hamamatsu.com/tutorials/flash/imageprocessing
- Everything light-microscopy related: http://www.olympusmicro.com
- Basic properties of digital images: http://learn.hamamatsu.com/articles/digitalimagebasics.html
- Free online courses:
  - Coursera: www.coursera.org
    - Image and Video Processing: From Mars to Hollywood with a Stop at the Hospital
    - Computer Vision: From 3D Reconstruction to Visual Recognition
    - Fundamentals of Digital Image and Video Processing
  - Udacity: https://www.udacity.com
  - EdX: https://www.edx.org

ICY

- An open community platform for bioimage informatics
- http://icy.bioimageanalysis.org
- Practical on Thursday the 12th
- Flying in two ICY developers from Paris
Extra slides

Volume Rendering

Figure 4.1 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006): Visualization of volumetric image data using volume rendering and surface rendering methods. Volume rendering methods do not require an explicit geometrical representation of the objects of interest present in the data. A commonly used volume rendering method is ray casting: for each pixel in the view image, a ray is cast into the data, and the intensity profile along the ray is fed to a ray function, which determines the output value, such as the maximum, average, or minimum intensity, or accumulated opacity (derived from intensity or gradient magnitude information). By contrast, surface rendering methods require a segmentation of the objects (usually obtained by thresholding), from which a surface representation (triangulation) is derived, allowing very fast rendering by graphics hardware. To reduce the effects of noise, Gaussian smoothing is often applied as a preprocessing step prior to segmentation. As shown, both operations have a substantial influence on the final result: by slightly changing the degree of smoothing or the threshold level, objects may appear (dis)connected while in fact they are not. Therefore it is recommended to establish optimal parameter values for both steps while inspecting the effects on the original image data rather than looking directly at the renderings.
Co-localization

- A statistical analysis of spatial colocalization using Ripley's K function.
- Lagache et al. (ISBI 2013)
- Implemented as a plugin in ICY.

Figure 3.1 (Meijering and van Cappellen, Biological Image Analysis Primer, 2006; description on page 21 of the primer).

From the same primer: "... numbers: the so-called colocalization measures (Figure 3.1). Pearson's correlation coefficient is often used for this purpose but may produce negative values, which is counterintuitive for a measure expressing the degree of overlap. A more intuitive measure, ranging from 0 (no colocalization) to 1 (full colocalization), is the so-called overlap coefficient, but it is appropriate only when the number of fluorescent targets is more or less equal in each channel. If this is not the case, multiple coefficients (two in the case of dual-color imaging) are required to quantify the degree of colocalization in a meaningful way. These, however, tend to be rather sensitive to background offsets and noise, ..."

Line-Segment Detector

- Demo: http://demo.ipol.im/demo/gjmr_line_segment_detector/
- find parameters
