The Faculty of Applied Mathematics and Control Processes is called upon to advance knowledge of mathematics and computer technologies in all areas of human activity!
4. Programming Technologies
Andriyashin* A., Miyata** K., Parkkinen* J.P.
*University of Joensuu, Finland
**National Museum of Japanese History, Japan
Spectral images archive compression¹
Introduction. Many applications require spectral features. For example, viewing paintings in a digital museum under different illuminations requires a spectral image of the painting and the spectral characteristics of the illumination. Regular RGB color does not carry enough information for this. Therefore, spectral image processing is becoming an increasingly important research area. The objectives of the current research are to:
• Propose a lossy compression approach.
• Introduce a new clustering technique.
• Compare conventional and new clustering techniques.
The paper is organized as follows. First, each compression step
is proposed. Second, the implemented methods and test material are
described, and then the test results are shown. Finally, a discussion of
the results is given and conclusions are drawn.
Color spaces. We use two color spaces in this study: the spectral color space, to have accurate color information of spectral images, and, as a reference, the CIE L*a*b* space, a standard device-independent color space commonly used in color reproduction [1]. The spectral color space characterizes the space of light reflectances. In this space, color is represented by the reflectance spectrum of an object.
Spectral imaging has been proposed by several authors [2]. In spectral imaging, the image I(x, y, λ) contains the complete spectral color information of a scene. The spectral representation is independent of both the illuminant and the observer. The process of spectral image acquisition is shown in the figure below.
¹ This study was supported by a research grant from the Color Research Group, University of Joensuu.
In this study we are interested in storing color images of paintings in an image database to form a digital image archive for museum purposes. This requires compression.
Compression technique. Several methods for spectral image compression have been proposed recently, including both lossy [3] and lossless [4] techniques. One of the commonly used techniques for spectra reduction is Principal Component Analysis (PCA) applied in the spectral direction. This basic compression method has been used as the reference method in this study. PCA can express the spectral data in a subspace of lower dimension than the original spectral dimension [5].
PCA transforms the spectral representation into another, presumably smaller, spectral basis while preserving as much of the information contained in the original image as possible. The result of the reduction is a new spectral basis and a matrix of multipliers for that basis. The reverse operation is performed by multiplying the inverse of the reduced spectral basis with the multipliers.
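As an illustration of this reduction and its inverse, here is a minimal NumPy sketch (our own, not the authors' code; the array layout `spectra` of shape pixels × bands and the function names are assumptions):

```python
import numpy as np

def pca_reduce(spectra, n_components):
    """Reduce the spectral dimension by PCA along the spectral direction.

    spectra: (num_pixels, n_bands) array, one reflectance spectrum per pixel.
    Returns the spectral basis, the per-pixel multipliers and the mean spectrum.
    """
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # Eigen-decomposition of the spectral covariance matrix.
    cov = centered.T @ centered / centered.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    basis = eigvecs[:, order[:n_components]]     # (n_bands, n_components)
    multipliers = centered @ basis               # (num_pixels, n_components)
    return basis, multipliers, mean

def pca_reconstruct(basis, multipliers, mean):
    """Inverse operation: multiply the multipliers back onto the basis."""
    return multipliers @ basis.T + mean
```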
The Compression Ratio (CR) measures the ratio between the sizes of the original and the compressed image:

$$CR = \frac{Size(I_o(x, y, \lambda))}{Size(I_c(x, y, \lambda))},$$

where $Size(I_o(x, y, \lambda))$ is the size of the original image and $Size(I_c(x, y, \lambda))$ is the size of the compressed image. CR is highest with only one principal component, but then the image is a grey-level image and the color information is totally lost.
One of the ways to improve the color quality of PCA-based compression is to introduce clustering of the pixels prior to spectral compression [6]. Images usually contain areas of similar colors, so one spectrum can represent an entire area instead of a group of original spectra. To reduce this kind of redundant information in the image, pixels can be grouped into numbered clusters. The stored compressed image then consists only of the cluster table and pixel values that are indices into the cluster table. K-means is one of the simplest clustering methods [7]: an iterative method in which each cluster is represented by its centroid.
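The cluster-table idea can be sketched as follows; this is a simplified illustration with plain Lloyd iterations and our own variable names, not the exact procedure of [6]:

```python
import numpy as np

def kmeans(spectra, k, n_iter=50, seed=0):
    """Plain Lloyd's k-means: each cluster is represented by its centroid."""
    rng = np.random.default_rng(seed)
    centroids = spectra[rng.choice(len(spectra), k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assign every spectrum to the nearest centroid
        # (fine for a sketch; a real implementation would chunk this).
        dists = np.linalg.norm(spectra[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids (keep the old one if a cluster becomes empty).
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = spectra[labels == j].mean(axis=0)
    return centroids, labels

# The stored compressed image is the cluster table plus per-pixel indices:
# centroids -> k x n_bands table, labels -> one small integer per pixel.
```

With 8-bit coding and 8-bit cluster indices, the compression ratio is then roughly (number of pixels × number of bands) divided by (k × number of bands + number of pixels).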
To reduce the transformation error, PCA requires that the set of transformed patterns lie near or in a subspace. To find the most suitable sets of image pixels, the subspace clustering method was used [8]. The method is based on the projection of a vector onto a subspace; each sample is assigned to the subspace onto which its projection is longest.
Subspace clustering finds clusters as elements distributed along cones in the spectral space. Although the apex of each cone must lie at the origin, this approach was hypothesized to decrease the transformation losses. The figure below shows an example of k-means and subspace clustering applied to the same data.
Fig. 1. Example of k-means clustering result (a) and Subspace clustering result (b)
Given samples $\{\lambda_j\}_{j=1}^{N}$ in $\mathbb{R}^n$, presumably grouped along $N_c$ subspaces $M_1, \ldots, M_{N_c}$, the algorithm calculates the subspace centers iteratively. The criterion for assigning a vector $\lambda_j$ to class $C_i$ is that the absolute value of its projection $\lambda_{ji}$ onto the subspace $M_i$ is maximal compared with its projections onto the other subspaces.
The dimension of the PCA transformation is kept minimal (equal to one) in order to maximize the compression ratio. The clustering method therefore aims to find clusters whose elements are the most suitable for projection onto a one-dimensional space (i.e., a line). The absolute value of the projection $\lambda_{ji}$ of a vector $\lambda_j$ onto the line spanned by the vector $\vec{M}_i$ equals the scalar product of $\lambda_j$ and the basis vector of the line divided by the absolute value of that basis vector [9]:
$$\lambda_{ji} = \frac{\lambda_j \cdot \vec{M}_i}{|\vec{M}_i|} \qquad (1)$$
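A possible implementation of this assignment rule is sketched below, under the assumption that each subspace $M_i$ is one-dimensional and spanned by a direction vector; the SVD-based center update is one plausible choice, not necessarily the authors':

```python
import numpy as np

def assign_to_subspaces(samples, directions):
    """Assign each sample to the 1-D subspace (line through the origin)
    onto which its projection is longest, following Eq. (1).

    samples:    (N, n) array of spectra.
    directions: (Nc, n) array; row i spans subspace M_i.
    """
    # |lambda_ji| = |lambda_j . M_i| / |M_i| for every sample/subspace pair.
    norms = np.linalg.norm(directions, axis=1)        # (Nc,)
    proj = np.abs(samples @ directions.T) / norms     # (N, Nc)
    return proj.argmax(axis=1)

def update_directions(samples, labels, n_clusters):
    """One possible center update: the dominant right singular vector
    of each cluster (the cones pass through the origin, so no centering)."""
    dirs = []
    for i in range(n_clusters):
        cluster = samples[labels == i]
        if len(cluster) == 0:
            # Fall back to a random direction if the cluster is empty.
            dirs.append(np.random.default_rng(i).standard_normal(samples.shape[1]))
            continue
        _, _, vt = np.linalg.svd(cluster, full_matrices=False)
        dirs.append(vt[0])
    return np.vstack(dirs)
```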
Test material and results. A goal of the compression is to preserve the color information of the paintings as completely as possible. The Root Mean Square Error (RMSE) was chosen as one of the compression quality measures. The CIE L*a*b* model is a common color space, defined in 1976. It is based on the trichromatic human perception of color. The color difference ∆E in the CIE L*a*b* uniform color space was defined [10] to evaluate color reproduction.
The compression methods used in this study fall into the following categories:
1. Compression based on k-means clustering in the L*a*b* and spectral spaces;
2. Compression based on subspace clustering in the L*a*b* and spectral spaces.
The compression methods were tested with nine old orthodox icons that were measured with a spectral imaging system [11]. The spectra were acquired over the range 380 nm – 780 nm with 5 nm resolution, so each image was represented by 81 spectral components. The values in all components were normalized into [0, 1] and coded with 8 bits.
The error measure for the decompressed image is based on the pixelwise reconstruction accuracy. For each method, the CR, ∆E and RMSE were calculated.
The diagrams in the figure below show the average results over all tested images for different parameters, spaces and clustering methods. To have comparable results, the compression ratio for all experiments was set to approximately 71.
Fig. 2. Relationship between the number of clusters and the (a) RMSE and (b) ∆E, averaged over all images
Discussion. This paper set out to identify the most suitable technique for a historical archive system. For a more quantitative evaluation, the compression technique was tested with different parameters. The experiments show that ∆E and RMSE decrease as the number of clusters grows, while CR stays approximately the same. We can also see that compression based on subspace clustering applied in the spectral space gives the lowest RMSE for every number of clusters, and for eight of ten icons it gives the lowest ∆E for every number of clusters in L*a*b*.
References
1. Plataniotis K.N., Venetsanopoulos A.N. Colour Image Processing and Applications. Springer-Verlag, 2000.
2. Chang C.I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification. New York: Kluwer Academic/Plenum Publishers, 2003.
3. Abousleman G.P., Marcellin M.W., Hunt B.R. Hyperspectral image compression using entropy-constrained predictive trellis coded quantization // IEEE Trans. Image Proc., 1997. V. 6, № 4. P. 566–573.
4. Memon N.D., Sayood K., Magliveras S.S. Lossless compression of multispectral image data // IEEE Trans. Geosci. Remote Sensing, 1994. V. 32. P. 282–289.
5. Anderson T.W. An Introduction to Multivariate Statistical Analysis. New York: Wiley & Sons, 1958.
6. Kaarna A., Zemcik P., Kalviainen H., Parkkinen J.P.S. Compression of multispectral remote sensing images using clustering and spectral reduction // IEEE Trans. Geosci. Remote Sensing, 2000. V. 38, № 2. P. 1073–1082.
7. Duda R.O., Hart P.E., Stork D.G. Pattern Classification. 2nd edition. New York: Wiley & Sons, 2001.
8. Parkkinen J.P.S., Oja E. On subspace clustering // Proc. 7th Int. Conf. on Pattern Recognition. Montreal, Canada, 1984. P. 692–695.
9. Kwak J.H., Hong S. Linear Algebra. Pohang, Korea, 2002.
10. Robertson A.R. The CIE 1976 color-difference formulae // Color Res., 1977. V. 2. P. 7–11.
11. Laamanen H., Jaaskelainen T., Hauta-Kasari M., Parkkinen J.P., Miyata K. Imaging spectrograph based spectral imaging system // Proc. 2nd European Conf. on Color in Graphics, Imaging, and Vision, and Sixth International Symposium on Multispectral Color Science. Aachen, Germany, 2004. P. 427–430.
Jetsu T., Heikkinen V., Hauta-Kasari M., Parkkinen J.P.
University of Joensuu, Finland
Estimation of n-dimensional reflectance spectra
from RGB data using polynomial model
Abstract. In the case of digital cameras, device-dependent values describe the camera's response to the incoming spectrum of light. The transformation from one device space to another has to be defined separately in each case. Device-dependent values are not colorimetric and do not necessarily provide a good starting point for transformations between device spaces. We converted the device-dependent digital camera RGB values to reflectance spectra, which are used as the device-independent color representation. We also calculated the corresponding results for direct RGB-CIELAB conversion. We have modeled the conversion from one color space to another as a regularized polynomial regression problem.
1. Introduction. Digital color cameras capture the spectrum of
physical stimuli by filtering the incoming color signal through color
filters with different spectral transmittances. In the case of digital
cameras (non-colorimetric), the device dependent RGB values describe
this response to color. If we want to transform camera RGB values to a device-independent space, we need to define the mapping separately for each device, for example via a least-squares regression method. Values in device-independent color spaces like CIE XYZ, CIELAB and sRGB are light-source dependent, so separate representations should be calculated for each illumination condition. If we instead convert the device-dependent RGB values to reflectance spectra, any needed color information can then be calculated from the spectra for arbitrary light sources.
We have modeled and tested the conversion between color spaces as
a regularized polynomial regression problem [1]. The goal of our study
was to investigate whether the reflectance-estimation method can be used for color camera calibration. We tested the model using training sets of different sizes. We calculated the results for the polynomial transformation in explicit spectral reconstruction and in CIELAB reconstruction. This method has been used, for example, in [3] and [4]. The method was evaluated by using a colorimetric measure and a spectral measure with values from two digital cameras.
2. Color Science Terms. When we want to calculate different color
coordinate representations [5] from measured spectra φ(λ), the CIE XYZ
tristimulus values are used as a starting point. Tristimulus values X, Y
and Z can be calculated using formulas (1) – (3):
$$X = k \int_\lambda \varphi(\lambda)\,\bar{x}(\lambda)\,S(\lambda)\,d\lambda, \qquad (1)$$

$$Y = k \int_\lambda \varphi(\lambda)\,\bar{y}(\lambda)\,S(\lambda)\,d\lambda, \qquad (2)$$

$$Z = k \int_\lambda \varphi(\lambda)\,\bar{z}(\lambda)\,S(\lambda)\,d\lambda, \qquad (3)$$

where the normalization factor is

$$k = \frac{100}{\int_\lambda S(\lambda)\,\bar{y}(\lambda)\,d\lambda}, \qquad (4)$$

φ(λ) is the examined spectrum, and $\bar{x}(\lambda)$, $\bar{y}(\lambda)$ and $\bar{z}(\lambda)$ are the color matching functions of the CIE standard observer. The relative spectral power distribution of the light source, S(λ), is also needed in the calculations.
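In practice the integrals (1)–(4) are evaluated as sums over the sampled wavelengths. A sketch, assuming the color matching functions and the illuminant are sampled on the same wavelength grid as the spectrum (array names are ours):

```python
import numpy as np

def spectrum_to_xyz(phi, xbar, ybar, zbar, S, dlam=5.0):
    """Approximate Eqs. (1)-(4) by summation over the sampled wavelengths.

    phi:  reflectance spectrum phi(lambda), shape (n,)
    xbar, ybar, zbar: CIE standard observer color matching functions, shape (n,)
    S:    relative spectral power distribution of the light source, shape (n,)
    dlam: wavelength sampling step in nm.
    """
    k = 100.0 / np.sum(S * ybar * dlam)    # normalization, Eq. (4)
    X = k * np.sum(phi * xbar * S * dlam)  # Eq. (1)
    Y = k * np.sum(phi * ybar * S * dlam)  # Eq. (2)
    Z = k * np.sum(phi * zbar * S * dlam)  # Eq. (3)
    return X, Y, Z
```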
In the CIELAB color space, the L* axis describes the lightness of a color, a* is the red-green axis and b* is the blue-yellow axis. CIELAB coordinates can be calculated from the tristimulus values using formulas (5) – (8), where $X_N$, $Y_N$ and $Z_N$ are the tristimulus values of the reference white.

$$L^* = 116 \cdot f\!\left(\frac{Y}{Y_N}\right) - 16, \qquad (5)$$

$$a^* = 500\left[f\!\left(\frac{X}{X_N}\right) - f\!\left(\frac{Y}{Y_N}\right)\right], \qquad (6)$$

$$b^* = 200\left[f\!\left(\frac{Y}{Y_N}\right) - f\!\left(\frac{Z}{Z_N}\right)\right], \qquad (7)$$

$$f(\omega) = \begin{cases} \omega^{1/3}, & \omega > 0.008856 \\ 7.787\,\omega + \frac{16}{116}, & \omega \le 0.008856 \end{cases} \qquad (8)$$
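A direct transcription of formulas (5)–(8) into code might look as follows (function and argument names are ours):

```python
import numpy as np

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """CIELAB coordinates from tristimulus values, Eqs. (5)-(8).
    Xn, Yn, Zn are the tristimulus values of the reference white."""
    def f(w):
        # Cube root above the 0.008856 threshold, linear branch below it.
        return np.where(w > 0.008856, np.cbrt(w), 7.787 * w + 16.0 / 116.0)
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b
```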
We have used the following error measure for evaluating the CIELAB and spectral estimation:

$$\Delta E = \sqrt{(L^* - \tilde{L}^*)^2 + (a^* - \tilde{a}^*)^2 + (b^* - \tilde{b}^*)^2}, \qquad (9)$$
where L*, a* and b* are the original CIELAB values, and $\tilde{L}^*$, $\tilde{a}^*$ and $\tilde{b}^*$ are, in the CIELAB case, the estimated CIELAB values and, in the spectral case, the CIELAB values calculated from the estimated spectra. The color difference ∆E in CIELAB space is widely used in industrial applications, for example for quality control purposes. The ∆E limit for accurate color measurements is usually around 0.5 – 1. If the ∆E value is below 3, the difference between colors is considered quite small in practical applications. ∆E values between 3 and 6 are still reasonable, and values over 6 usually describe a disturbing color difference.
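Equation (9) is the Euclidean distance in L*a*b*; a tiny helper, assuming the CIELAB triples are already computed:

```python
import numpy as np

def delta_e(lab_ref, lab_est):
    """CIELAB color difference, Eq. (9): Euclidean distance in L*a*b*."""
    return float(np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_est)))

# Example: delta_e((50.0, 10.0, -5.0), (50.5, 9.2, -4.1)) -> about 1.3
```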
3. Polynomial Model. In the polynomial transformation we have the equation

$$XW = Y, \qquad (10)$$

where the transformation matrix W maps the camera response values (matrix $X \in \mathbb{R}^{l \times 3}$) to CIELAB values (matrix $Y \in \mathbb{R}^{l \times 3}$) or to high-dimensional spectra (matrix $Y \in \mathbb{R}^{l \times n}$). Here l is the number of samples and n denotes the number of components in the spectrum. The unknown coefficients of this model can be obtained by least squares approximation, using the pseudo-inverse approach and known RGB-CIELAB or RGB-spectrum pairs. The solution of problem (10) can thus be calculated as

$$W = (X^T X)^{-1} X^T Y. \qquad (11)$$
For the training set, the method solves $\min_W \|XW - Y\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm. This linear model can be extended to higher order polynomials by adding terms $R^2, G^2, B^2, RG, RB, GB, \ldots$ to matrix X [3], [4]. In the testing phase, we used 1st, 2nd, 3rd and 4th degree polynomials with 3, 10, 20 and 35 terms, respectively. The models with 10, 20 and 35 terms also include a constant term 1.
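A sketch of the polynomial expansion and the least-squares solution (10)–(11); the term-generation order is our own choice, but the column counts match the 3, 10, 20 and 35 terms mentioned above:

```python
import numpy as np
from itertools import combinations_with_replacement

def polynomial_terms(rgb, degree, constant=True):
    """Expand RGB rows into polynomial terms R, G, B, R^2, RG, ... up to `degree`.
    With degree 1..4 (plus a constant from degree 2 on) this yields
    3, 10, 20 and 35 columns respectively."""
    cols = []
    if constant and degree >= 2:
        cols.append(np.ones(len(rgb)))
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(3), d):
            cols.append(np.prod(rgb[:, idx], axis=1))
    return np.column_stack(cols)

def solve_least_squares(X, Y):
    """Least squares solution of XW = Y, Eq. (11)."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W
```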
It is possible that for higher order polynomials the solution starts oscillating and overfitting occurs, because the polynomial adapts to the given training data too accurately but fails to generalize well to test data. Noise in the measured data also provides false information for the estimated function. Regularization is a method where additional constraints are used to limit the capacity of the resulting function to overfit the data. In Tikhonov regularization [2] we add the regularization parameter λ to the normal equations. The corresponding matrix equation, from which the matrix W can be solved, is

$$(X^T X + \lambda I)W = X^T Y. \qquad (12)$$

Truncated singular value decomposition [2] (or the principal eigenvector method) is another simple regularization method for capacity control. It means that the small singular values of matrix X are discarded when the computation of matrix W is performed:
$$X = USV^T = \sum_{i=1}^{k} \sigma_i u_i v_i^T = \sum_{i=1}^{p} \sigma_i u_i v_i^T + \sum_{i=p+1}^{k} \sigma_i u_i v_i^T = U_1 S_1 V_1^T + U_2 S_2 V_2^T. \qquad (13)$$
Only the first p singular values are used for the calculation of matrix W. The pseudoinverse $X^\dagger$ of matrix X is then calculated as $X^\dagger = V_1 S_1^{-1} U_1^T$. We tested both methods and concluded that the Tikhonov method performs slightly better than the truncated SVD. Generally the difference between the two methods was very small, so the final results are presented only for the Tikhonov method.
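Both regularization variants can be written in a few lines; a sketch with assumed parameter names `lam` (Tikhonov parameter λ) and `p` (number of retained singular values):

```python
import numpy as np

def tikhonov_solve(X, Y, lam):
    """Solve (X^T X + lam*I) W = X^T Y, Eq. (12)."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Y)

def truncated_svd_solve(X, Y, p):
    """Keep only the p largest singular values of X, Eq. (13),
    and use the pseudoinverse X+ = V1 S1^{-1} U1^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    U1, s1, V1t = U[:, :p], s[:p], Vt[:p, :]
    return V1t.T @ np.diag(1.0 / s1) @ U1.T @ Y
```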
4. Experiments. We tested the performance of the polynomial model in two cases: when estimating 1) CIELAB values and 2) spectra from RGB [1]. We evaluated the color difference between the original and estimated data using the CIELAB ∆E error measure. We tested how the number of polynomial terms affects the estimation performance, and whether Tikhonov regularization would improve the results.
For testing purposes, we had RGB data of the GretagMacbeth ColorChecker (24 samples) and the Munsell Book of Color – Matte Finish Collection (1269 samples), acquired with Fujifilm FinePix S1 Pro and Canon PowerShot A20 digital cameras under a daylight simulation light source. The spectra of both sets were sampled from 400 nm to 700 nm with a 5 nm step. The Munsell spectra are from the University of Joensuu Color Group Spectral Database [6].
At first, the Munsell set was divided into three parts: training, testing and validation sets consisting of 635, 317 and 317 samples, respectively. We used random subsets of 200, 50 and 24 samples from the Munsell training set, and the Macbeth set, as the final training sets. For each set size, we tested two different sets picked randomly from the Munsell training set. The regularization parameter for Tikhonov regularization and the degree of the polynomial were chosen so that the ∆E errors for the test set were minimized. The chosen model parameters were validated using a separate validation set. Numerical results for the validation set are presented in Tables 1 and 2.
Table 1. Error values for Fujifilm camera

                     ∆E / CIELAB est.         ∆E / spectral est.
Training Set         Avg.   Std.   Max.       Avg.   Std.   Max.
Munsell 200 / I      1.95   1.06    7.84      2.56   1.92   11.07
Munsell 200 / II     1.98   1.06    6.90      2.11   1.23    7.72
Munsell 50 / I       2.20   1.26   11.60      3.37   2.47   13.18
Munsell 50 / II      2.31   1.36   10.40      4.10   3.71   20.21
Munsell 24 / I       2.73   1.48    9.99      4.29   3.70   19.60
Munsell 24 / II      2.66   1.60   11.63      3.50   3.00   18.24
Macbeth              4.99   2.80   16.85      6.49   3.26   17.80
Table 2. Error values for Canon A20 camera

                     ∆E / CIELAB est.         ∆E / spectral est.
Training Set         Avg.   Std.   Max.       Avg.   Std.   Max.
Munsell 200 / I      3.66   2.40   13.65      3.07   2.11   12.45
Munsell 200 / II     3.16   1.94   12.18      2.87   1.83   11.87
Munsell 50 / I       5.24   3.89   23.32      4.74   3.15   22.16
Munsell 50 / II      4.52   2.78   15.77      4.74   3.57   19.91
Munsell 24 / I       6.92   4.56   23.64      5.33   4.49   27.81
Munsell 24 / II      6.65   4.44   25.36      5.24   3.56   18.49
Macbeth              6.43   3.50   19.59      6.20   3.48   20.02
5. Conclusions. The performance of color calibration via spectral estimation depends on the camera. For the Canon A20 camera, the spectral estimation gives better results than the direct CIELAB estimation. The behavior is the opposite for the Fujifilm camera, which clearly shows stronger results for direct CIELAB estimation in terms of maximal color difference. There are large differences between the two cameras. Overall, the color calibration results for the low-cost Canon are worse than for the Fujifilm camera.
Regularization is important when we use higher order polynomials and small training sets for the transformation. On the other hand, large regularization terms usually have to be used only in cases where the degree of the polynomial is already too high for the training set. If the degree of the polynomial was properly chosen, the effect of regularization was small or it was not needed at all.
The size of the training set is obviously a very important factor in the training process. The "quality" of the training set is also an important part of the model. As the size of the training set becomes larger, the performance of the polynomial model improves. When a small training set is used, it must be chosen carefully: there are quite large deviations between the results for smaller training sets of the same size. With randomly chosen small training sets, the polynomial model is a very unstable method.
References
1. Jetsu T., Heikkinen V., Parkkinen J.P., Hauta-Kasari M.,
Martinkauppi B., Lee S.D., Ok H.W., Kim C.Y. Color calibration
of digital camera using polynomial transformation // Accepted to CGIV06 – 3rd European Conference on Colour in Graphics, Imaging, and Vision, Leeds, UK, June 19 – 22, 2006.
2. Neumaier A. Solving Ill-conditioned and singular linear systems, a
tutorial on regularization // SIAM Review, 1998. V. 40. P. 636–666.
3. Stigell P., Miyata K., Hauta-Kasari M. Wiener estimation method
in estimation of spectral reflectance from RGB images // Pattern
recognition and image analysis, 2004. V. 15, № 2. P. 327–329.
4. Connah D.R., Hardeberg J.Y. Spectral recovery using polynomial
models // Proceedings of the SPIE, 2005. V. 5667. P. 65–75.
5. Wyszecki G., Stiles W.S. Color science: concepts and methods,
quantitative data and formulae. USA, John Wiley & Sons, Inc., 1982.
6. University of Joensuu Color Group, Spectral Database. http://spectral.joensuu.fi/
Krasavin K., Parkkinen J.P., Jaaskelainen T.
University of Joensuu, Finland
Digital watermarking of images
Abstract. In this study we evaluate different techniques for digital watermarking of images. Specifically, we study watermarking of RGB images in mobile applications and watermarking of spectral images in industrial applications. In digital watermarking for mobile devices we consider a watermarking technique in the spatial domain. We tried to find out whether there is a difference in the human evaluation of the visual quality of a watermarked image, depending on the image properties and the properties of a display. In digital watermarking for spectral images we study a technique for embedding a watermark in the frequency domain and evaluate its robustness against a compression attack.
1. Introduction. Copyright protection is becoming more important as digital imaging displaces analog imaging. Digital representation allows preserving the quality of images after image processing operations. Copying can be done easily and quickly, and the quality of the copy does not differ from the original. Digital watermarking offers a possibility to prevent illegal copying and distribution of digital media [1]. As a complementary part to cryptography, the watermarking technique protects the data by embedding a watermark in such a way that it does not disturb the image under normal viewing conditions.
The embedded watermark can be extracted for authentication purposes. Some techniques embed the watermark in the spatial domain; others embed it in the transform domain. Spectral color imaging is an imaging method where the color of an object is represented more accurately than in traditional RGB images. Spectral imaging is becoming a practical tool in many applications, e.g. digital commerce, industrial quality control, and digital museums. For spectral images, the requirements set by the various applications may be diverse. This study acknowledges these practical requirements and, as such, provides a practical approach to watermark embedding.
2. Digital watermarking for mobile devices. In this study
we consider a technique for watermarking in the spatial domain presented by Bruyndonckx et al. [2]. The method is based on the modification of the luminance values inside blocks. The size of the block depends on the
size of the watermark and the size of the original image. The choice
of the block which will contain watermark information is determined
by a secret key. The pixels of the selected blocks are classified in three
groups of homogeneous luminance. Pixels are grouped also based on their
spatial position in the block. The robustness of the embedded watermark is defined by a required difference α between the group mean values. For our experiments we chose α = [5, 7, 10, 12, 15].
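A heavily simplified sketch of the block-based idea: it embeds one bit per luminance block by forcing a mean difference α between two fixed spatial groups, whereas the method of [2] additionally classifies pixels into zones of homogeneous luminance and selects blocks with a secret key:

```python
import numpy as np

def embed_bit(block, bit, alpha):
    """Embed one watermark bit into an 8-bit luminance block by forcing the
    mean of group A to differ from the mean of group B by `alpha`.
    Groups here are a fixed checkerboard split (a simplification)."""
    mask = np.indices(block.shape).sum(axis=0) % 2 == 0   # group A
    out = block.astype(float).copy()
    diff = out[mask].mean() - out[~mask].mean()
    target = alpha if bit else -alpha
    shift = (target - diff) / 2.0
    out[mask] += shift
    out[~mask] -= shift
    return np.clip(out, 0, 255)

def extract_bit(block):
    """Decide the bit from the sign of the mean difference."""
    mask = np.indices(block.shape).sum(axis=0) % 2 == 0
    return int(block[mask].mean() - block[~mask].mean() > 0)
```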
Test images. For our experiments we selected three different images that represent a range of characteristics: one is an image of a human face, the second is a technical and artificial image, and the third is a high-detail image (Fig. 1). We chose the image "Lena" as the human face image, a map image as the artificial image, and the image "Sailor boats" as the high-detail image. Every image is watermarked with five different magnitudes. For the experiments, we used images with sizes relative to the display resolution of a particular device. In order to have a similar distortion of the watermarked image on different devices, we increased the size of the watermark according to the dimensions of a particular image.
Fig. 1. Test images: Lena, Map, Sailorboats. The upper row shows the original images and the bottom row shows the watermarked images with magnitude equal to 15
Visual assessment testing. We tested a number of watermark
magnitudes for a range of image types. For the visual assessment test we
chose three images and five magnitudes of watermark. The watermarked image was shown on a mobile phone, a PDA, and a CRT display with 176x208, 240x320, and 1024x768 display resolutions, respectively. The quality of the watermarked image was evaluated by 20 human observers. In the image quality assessment experiment, the subject is asked to classify watermarked images into a number of descriptive categories. A modified subjective mean opinion score (MOS) is used for measuring the quality of a watermarked image. For each watermarked image, the scores are averaged among all observers to obtain the MOS grade for that image. In Fig. 2, left, the MOS scores for each image are shown. The normalization of the individual quality scores is done using the z-score transform, which indicates the deviation from the mean score as a number of standard deviations. In Fig. 2, right, the z-scores for each image are shown.
Fig.2. MOS-scores and z-scores
3. Digital watermarking for spectral images. Compared to RGB images, spectral images have a higher data dimension, which in itself gives more possibilities for watermarking. The watermark can be a binary image, a gray-scale image, or a spectral image. In this study we used a gray-scale image as the watermark.
The embedding procedure follows the method described in [3]. The watermark is embedded in the 3D wavelet transform domain. First, a three-dimensional wavelet transform $I_{wt}$ of the spectral image is computed. Then, a two-dimensional wavelet transform $W_{wt}$ of the watermark W is computed. The spatial size of the watermark is equal to the spatial size of the transformed block of the image. The transformed values of the watermark are added to the values of the transformed block $B_{wt}$ of the spectral image, resulting in the watermarked block:

$$B_{wt,wm} = B_{wt} + \alpha \cdot W_{wt}. \qquad (1)$$
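The embedding step (1) can be sketched with the PyWavelets package; this simplified version adds the watermark directly into one subband of the 3-D transform and omits the watermark's own 2-D transform and the per-pixel band selection described below:

```python
import numpy as np
import pywt  # PyWavelets

def embed_watermark(spectral_img, watermark, alpha, wavelet="haar"):
    """Simplified sketch of Eq. (1): B_wt,wm = B_wt + alpha * W_wt.

    spectral_img: (rows, cols, bands) array.
    watermark:    gray-scale image, assumed to fit spatially inside the
                  chosen transformed block (here at most half the image size).
    """
    coeffs = pywt.dwtn(spectral_img, wavelet, axes=(0, 1, 2))  # 3-D DWT
    block = coeffs["aad"]                    # one transformed block B_wt
    # Add the watermark to a single band of the chosen block (simplification;
    # the paper selects a band per pixel by a median rule instead).
    block[:watermark.shape[0], :watermark.shape[1], 0] += alpha * watermark
    return pywt.idwtn(coeffs, wavelet, axes=(0, 1, 2))         # inverse 3-D DWT
```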
For each pixel of the watermark a suitable band from the transformed block is selected. The band holding the median of the respective pixels among all the bands is selected to store the watermark pixel. In this way we try to ensure that the watermark is not stored in a high-energy or a low-energy component of the image. The strength of the watermarking is controlled by the parameter α, which is calculated as the product of three parameters:

$$\alpha = \alpha_1 \cdot \alpha_2 \cdot \alpha_3. \qquad (2)$$
The multiplier $\alpha_1$ depends on the frequency content of the wavelet-transformed band b and is calculated with respect to a contrast sensitivity function of the selected band, to minimize the perceptual error in the watermarked image:

$$\alpha_1 = \frac{S_b}{\max_b \sqrt{S_b}}, \qquad S_b = \sum_{u,v} C(u, v)\,|F_b(u, v)|^2,$$

$$C(u, v) = 5.05\,e^{-0.178(u+v)}\left(e^{0.1(u+v)} - 1\right),$$

where C(u, v) is the contrast sensitivity matrix with frequencies u and v, and $F_b(u, v)$ is the discrete Fourier transform of the band b in the block.
The multiplier $\alpha_2$ controls the strength of the watermark: the larger $\alpha_2$ is, the better the watermark survives attacks. The value of $\alpha_3$ is calculated as the mean over all pixels and channels of the original image I. Increasing the value of $\alpha_2$ makes the embedded watermark more robust, but at the same time the signal-to-noise ratio of the watermarked image decreases. The spectral image, now containing the watermark, is reconstructed by the inverse 3D DWT. Watermark extraction is the inverse operation of the embedding procedure.
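The strength parameters can be computed directly from these formulas; a sketch (the normalization in $\alpha_1$ follows the formula as printed above, and `np.fft.fft2` stands in for the DFT of each band):

```python
import numpy as np

def contrast_sensitivity(u, v):
    """C(u, v) = 5.05 * exp(-0.178(u+v)) * (exp(0.1(u+v)) - 1)."""
    return 5.05 * np.exp(-0.178 * (u + v)) * (np.exp(0.1 * (u + v)) - 1.0)

def alpha1_per_band(bands):
    """alpha_1 for each wavelet-transformed band b, weighted by the CSF.

    bands: iterable of 2-D bands; S_b = sum_{u,v} C(u,v) |F_b(u,v)|^2.
    """
    S = []
    for band in bands:
        F = np.fft.fft2(band)                        # discrete Fourier transform
        u, v = np.meshgrid(np.arange(band.shape[0]),
                           np.arange(band.shape[1]), indexing="ij")
        S.append(np.sum(contrast_sensitivity(u, v) * np.abs(F) ** 2))
    S = np.array(S)
    return S / np.max(np.sqrt(S))                    # normalization as printed

def embedding_strength(alpha1, alpha2, original_image):
    """alpha = alpha1 * alpha2 * alpha3, with alpha3 the mean of the original
    image over all pixels and channels, per Eq. (2)."""
    alpha3 = float(np.mean(original_image))
    return alpha1 * alpha2 * alpha3
```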
Resistance to compression attack. In the experiments, the proposed embedding procedure was applied to a large number of spectral images, which were watermarked and then compressed with different bit rates. The embedded watermark was extracted from the compressed and reconstructed watermarked images. The signal-to-noise ratio was calculated between the original image and the compressed watermarked image. For the extracted watermark, the correlation coefficient between the original and the extracted watermark was calculated; the signal-to-noise ratio of the extracted watermark was also calculated as a measure of quality. The compression procedure described above was applied to the watermarked images. We used 8 principal components of the spectral image and wavelet compression with bit rates of [4, 0.25, 0.01562] bits per pixel, which result in compression ratios of [16, 256, 4096]. In the first experiment we wanted to find a reasonable range for $\alpha_2$. In this experiment we used a large set of spectral images, but the results were averaged to define only one common value of $\alpha_2$ for all images. For the watermarked images, the signal-to-noise ratios were calculated. In Fig. 3, left, the SNR of the compression is shown; it was calculated between the watermarked image and the compressed watermarked image. Increasing the $\alpha_2$ coefficient results in only a small SNR degradation compared with an image compressed at the same bit rate.
In Fig. 3, right, the SNR of the watermarking is shown. In both figures, the x-axis is the $\alpha_2$ coefficient and the y-axis is the value of the SNR. For the extracted watermarks, the signal-to-noise ratios and the correlation coefficients were calculated. In Fig. 4, left, the SNR between the original and the extracted watermark is shown, and in Fig. 4, right, the correlation coefficient between the original and the extracted watermark is shown with respect to $\alpha_2$, averaged over all images. In these figures, the x-axis represents the $\alpha_2$ value and the y-axis represents the SNR or the correlation coefficient, respectively, between the original and the extracted watermark. According to the experiment, a reasonable range for $\alpha_2$ would be from 0.01 to 0.08. With smaller values the embedding is too weak against compression and the quality of the extracted watermark is too low for registration. Values larger than 0.08 would produce an image with a visible watermark.
Fig. 3. SNR of the PCA/wavelet compressed images without watermarks (left) and SNR of compressed images with watermarks (right)
Fig. 4. SNR (left) and correlation coefficient (right) of the extracted watermarks with respect to the $\alpha_2$ value
4. Conclusions. Digital image watermarking is becoming a practical tool in many areas. In this study we considered both mobile and industrial applications of watermarking. In watermarking for mobile devices we evaluated the visual quality of watermarked images displayed on mobile devices. In watermarking for spectral images we embedded a gray-scale watermark into a spectral image in the three-dimensional wavelet transform domain. The properties of the watermarking on a large set of spectral images were studied, in particular the robustness of the embedded watermark against PCA/wavelet-based compression. For the embedding parameter values we defined a range in which the embedding shows robust operation.
References
1. Ingemar J. Cox, Matthew L. Miller, Jeffrey A. Bloom. Digital
Watermarking. USA, San Diego: Academic Press, 2002.
2. Bruyndonckx O., Quisquater J.J., Macq B.M. Spatial method for
copyright labeling of digital images // Proc. of IEEE Workshop on Nonlinear Signal and Image Processing, 1995. P. 456–459.
3. Kaarna A., Parkkinen J.P. Digital watermarking of spectral
images with three-dimensional wavelet transform // Proc. of the
Scandinavian conference on image analysis, SCIA 2003, Goteborg,
Sweden, June 29 – July 2, 2003. P. 320–327.
4. Krasavin K., Parkkinen J.P., Jaaskelainen T. Digital watermarking for mobile devices // Society for Information Display, 2006.
Lehtonen J., Hauta-Kasari M., Parkkinen J.P.,
Jaaskelainen T.
University of Joensuu, Finland
Spectral image format for data communications
Abstract. Spectral images are becoming more and more common. However, these images require a lot of memory and therefore cannot be transferred efficiently over an ordinary network. Here, an image format for data communications is presented.
1. Introduction. Color is usually represented in a three-dimensional color coordinate system; for example, RGB coordinates are widely used. However, many applications need more accurate color information, for instance telemedicine, e-commerce, quality control, or archiving images of cultural heritage objects [1]. Three-dimensional color representations have many limitations. One problem is controlling color under different illuminations. Two different colors may look the same under one illumination and different under another [2]. This phenomenon is called metamerism. The three-dimensional color gamut is also device dependent, and all needed colors cannot be displayed [3]. However, these problems can be solved by using a spectral representation [2, 4]. Here, the light intensities at different wavelengths are measured and digitally saved. For example, color measured over the visual 380–780 nm wavelength range with a 5 nm interval gives a vector of 81 light intensity values that represents the measured color spectrum.
A spectral image is a digital image that contains a saved color spectrum in every pixel. However, these kinds of images take a lot of memory, and transferring them through an ordinary network is time consuming. Therefore, spectral image compression is needed, and it is becoming more and more important [5, 6, 7]. In the literature, some compression methods use Principal Component Analysis (PCA) [8, 9] or Independent Component Analysis (ICA) [10, 11]. From these studies, it can be seen that 5–10 basis vectors are needed for saving the color spectra with high accuracy.
The human visual system is more sensitive to spatial resolution in the achromatic channel than in the chromatic channels [12]. In the JPEG compression method, colors are represented in the YCbCr color coordinate system, where the achromatic information is in the Y channel and the chromatic information is in the Cb and Cr channels. The subsampling is applied only to the Cb and Cr channels [13].
In this paper, a simple compression method based on PCA and JPEG-style subsampling is presented. Because of its simplicity, it is a good method for spectral image browsing, where the compressed images are stored on a server and the user browses the images on the client side [14]. The theory of this compression method is explained and some test results of compressing two small spectral databases are shown.
2. Spectral image compression. The compression of spectral images is a two-phase method. First, PCA is applied to the spectral image to form eigenimages. Then, spatial subsampling is applied to the eigenimages. These steps can also be done in reverse order, which gives a faster algorithm, but the steps are the same either way [14]. ICA can also be used instead of PCA.
Let I be the spectral image with k pixels, where the pixels are ordered into a vector. Each pixel value is an n-dimensional color spectrum. Let R be the correlation matrix of the image I,

$$R = \sum_{i=1}^{k} I_i I_i^T, \qquad (1)$$

where $I_i$ is the ith spectrum in the image I. Next, we can calculate the eigenvectors τ of R. From this set of eigenvectors, the m first eigenvectors, ordered by the largest eigenvalues, are saved to form the basis vectors $(\tau_1, \ldots, \tau_m)$ of the spectral image, where $\tau_i$ is the ith eigenvector of the ordered set. The eigenimages P are then formed with the equation

$$P = (\tau_1, \ldots, \tau_m)^T I. \qquad (2)$$
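A compact sketch of equations (1) and (2), with the image given as an n × k matrix of spectra (the column layout and variable names are our own choices):

```python
import numpy as np

def eigenimages(I, m):
    """Eqs. (1)-(2): correlation matrix, eigenvectors and eigenimages.

    I: (n, k) array, one n-dimensional spectrum per pixel (pixels as columns;
       the name I matches the notation in the text).
    m: number of basis vectors to keep.
    """
    R = I @ I.T                         # R = sum_i I_i I_i^T, Eq. (1)
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]   # largest eigenvalues first
    tau = eigvecs[:, order[:m]]         # basis vectors (tau_1, ..., tau_m)
    P = tau.T @ I                       # eigenimages, Eq. (2)
    return tau, P
```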
In JPEG compression, the following subsampling schemes are used for YCbCr images: 4:4:4, 4:2:2, 4:2:0 or 4:1:1 [13]. The marking A:B:C defines that every row is divided into blocks of A pixels, where B pixels are chosen from every block in odd rows and C pixels from every block in even rows. The subsampling is applied only to the chromatic channels Cb and Cr. For example, the 4:2:0 scheme is the same as dividing the image into 2×2 blocks and saving the upper-left corner value from every block; see Figure 1, which shows the block sizes of the different schemes, with the black circles marking the pixels that are saved. Here, we apply the subsampling to the eigenimages so that the first eigenimage, carrying the highest amount of data, is untouched and therefore treated like the Y image, while all other eigenimages are subsampled and therefore treated like the Cb and Cr images. Larger block sizes, such as 3×3 blocks, were also used, where the saved pixel is the upper-left corner pixel, the center pixel, or the block average (or block median).
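The subsampling of the eigenimages can then be sketched as follows, assuming each eigenimage has been reshaped back to its spatial size; only the upper-left-pixel variant is shown:

```python
def subsample_eigenimages(eigenimages, block=2):
    """Keep the first eigenimage at full resolution (treated like the Y
    channel) and keep only the upper-left pixel of every block x block
    region in the remaining eigenimages (treated like Cb/Cr)."""
    first = eigenimages[0]
    rest = [img[::block, ::block] for img in eigenimages[1:]]
    return first, rest
```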