INTRODUCTION
Fabric defect detection is one of the most important procedures affecting manufacturing efficiency and quality in the fabric industry. So far, most factories accomplish this task by human visual inspection, which leaves the quality of fabric devoid of consistency and reliability. It is found that even a highly trained inspector can only detect about 70% of the defects at a speed of 15-20 m min^{-1} (Sari-Sarraf and Goddard, 1999). With the development of image processing technology, many methods based on texture analysis have been proposed for defect detection. Several statistical methods using texture features, such as fractal dimension (Conci and Proenca, 1998; Bu et al., 2009), morphological features (Mallick-Goswami and Datta, 2000; Chandra et al., 2010) and the co-occurrence matrix (Latif-Amet et al., 2000; Lin, 2010), were proposed to discriminate defects from normal texture. Cohen et al. (1991) used the Gauss-Markov Random Field (GMRF) to model the texture pattern of non-defective fabric images; defects are detected when the model is rejected under hypothesis testing. Because of the high degree of periodicity of fabric texture, spectral approaches are also used for fabric defect detection. Chan and Pang (2000) used the Fourier transform for this task, which turned out to be suitable only for global defect detection because of its poor spatial localization. Defect detection methods with multiresolution decomposition using a bank of Gabor filters were proposed by Bodnarova et al. (2002) and Kumar and Pang (2002). As Gabor filter banks lead to redundant features at different scales, the orthogonal wavelet transform (Nadhim, 2006) is also used for texture characterization (Loum et al., 2007), classification (Raju et al., 2008) and fabric defect detection (Yang et al., 2002; Mingde and Zhigang, 2011).
Defect detection can be considered a one-class classification problem. Ohanian and Dubes (1992) and Randen and Husoy (1999) report cases where, from the viewpoint of texture classification, the Gray-Level Co-occurrence Matrix (GLCM) outperforms other features such as Gabor filter features, fractal features and MRF features. However, GLCM suffers from high computational complexity. In this study, an adaptive quantization method is proposed based on a texture model that is a mixture of two Gaussian distributions. The quantization method reduces the gray-scale from 256 levels to only several levels, which greatly reduces the computational complexity of the GLCM features. The SVDD classifier is used as a detector for defect detection. Several machine learning based texture classification methods have been proposed using neural networks (Kumar, 2003; Jianli and Baoqi, 2007; Chandra et al., 2010; Nagarajan and Balasubramanie, 2008; Mahi and Izabatene, 2011), the Support Vector Machine (SVM) (Kumar and Shen, 2002; Chu et al., 2011) and SVDD (Bu et al., 2009, 2010). The neural network and support vector based methods treat defect detection as a two-class classification problem, which requires both defective and non-defective samples for training. However, the requirement of large quantities of defective samples is usually not practical for on-line inspection in industrial settings. Similar to SVM, the SVDD classifier is a kernel-based classifier that does not suffer from local minima. However, it is a one-class classifier that requires only non-defective samples for training, so it is adopted in our algorithm.
FABRIC TEXTURE MODEL
Figure 1a and b show two samples of non-defective texture of plain and twill fabrics, respectively. It can be seen that the plain and twill textures are made up of one and two gray tones, respectively, which is also indicated by the solid lines in their histograms in Fig. 1c and d. The fabric texture model is built on the histogram of the gray-level fabric image. The histogram of the plain fabric texture tends to obey a Gaussian distribution and the histogram of the twill fabric texture tends to obey a mixture of two Gaussian distributions. So, the Probability Density Function (PDF) of gray-level g in the plain fabric image can be modeled as:

f(g) = (1/(√(2π)σ)) exp(-(g-μ)²/(2σ²))        (Eq. 1)

where, μ and σ are the mean and standard deviation of the Gaussian distribution, respectively. And the PDF of gray-level g in the twill fabric image can be modeled as:

f(g) = P_{1}·(1/(√(2π)σ_{1})) exp(-(g-μ_{1})²/(2σ_{1}²)) + P_{2}·(1/(√(2π)σ_{2})) exp(-(g-μ_{2})²/(2σ_{2}²))        (Eq. 2)

where, P_{1}+P_{2} = 1 and μ_{1}≤μ_{2}. P_{1}, μ_{1}, σ_{1} and P_{2}, μ_{2}, σ_{2} are the proportions, means and standard deviations of the two Gaussians in the mixture, respectively.

Fig. 1(a-d): 
Non-defective samples of (a) plain and (b) twill fabrics and
(c, d) corresponding histograms and fitting results 
In order to build a consistent model for both plain and twill fabrics, only Eq. 2 is used and Eq. 1 is considered a particular case of Eq. 2 with P_{1} = 1, P_{2} = 0, μ_{1} = μ_{2} and σ_{1} = σ_{2}. The parameters of the mixed Gaussian distribution in Eq. 2 can be estimated by curve fitting. The fitting results for both plain and twill fabric image histograms are illustrated by the dashed lines in Fig. 1c and d, respectively.
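As a sketch of this parameter estimation step, the mixture of Eq. 2 can be fitted to the normalized gray-level histogram by non-linear least squares. The synthetic two-tone sample below stands in for a real twill image and `scipy.optimize.curve_fit` is one way to do the curve fitting mentioned above, not necessarily the authors' implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def mixture_pdf(g, p1, mu1, s1, mu2, s2):
    # Eq. 2 with P2 = 1 - P1 enforced by construction
    gauss = lambda g, mu, s: np.exp(-(g - mu)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)
    return p1 * gauss(g, mu1, s1) + (1 - p1) * gauss(g, mu2, s2)

# Synthetic "twill" gray-levels: two tones around 80 and 150
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(80, 8, 40000), rng.normal(150, 10, 24000)])
pixels = np.clip(pixels, 0, 255)

hist, edges = np.histogram(pixels, bins=256, range=(0, 256), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Data-driven initial guess: the two histogram modes
p0 = [0.5, centers[np.argmax(hist[:128])], 10.0,
      centers[128 + np.argmax(hist[128:])], 10.0]
params, _ = curve_fit(mixture_pdf, centers, hist, p0=p0,
                      bounds=([0, 0, 1, 0, 1], [1, 255, 128, 255, 128]))
p1, mu1, s1, mu2, s2 = params
```

The fitted proportions and tone centers then parameterize the texture model used by the adaptive quantization below.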
GRAY-LEVEL CO-OCCURRENCE MATRIX AND ADAPTIVE QUANTIZATION
Spatial gray-level co-occurrence estimates image properties related to second-order statistics. GLCM has become one of the most well-known texture features and is widely used for texture characterization (Shang et al., 2011; Sheng et al., 2010). A co-occurrence matrix is a square matrix whose elements correspond to the relative frequency of occurrence of pairs of gray-level values of pixels separated by a certain distance in a given direction. The G×G gray-level co-occurrence matrix P_{d} for a displacement vector d = (dx, dy) is defined as follows. The entry (i, j) of P_{d} is the number of occurrences of the pair of gray levels i and j which are a distance d apart. Formally, it is given as:

P_{d}(i, j) = |{((x_{1}, y_{1}), (x_{2}, y_{2})): (x_{2}, y_{2}) = (x_{1}+dx, y_{1}+dy), I(x_{1}, y_{1}) = i, I(x_{2}, y_{2}) = j}|        (Eq. 3)

where, I denotes an image of size U×V with G gray values, (x_{1}, y_{1}) and (x_{2}, y_{2}) are pixel positions within the image and |·| is the cardinality of a set. The co-occurrence matrix reveals certain properties about the spatial distribution of the gray levels in the texture image.
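The counting rule of Eq. 3 can be sketched directly (a straightforward double loop, not an optimized implementation; rows are taken as the x coordinate here by assumption):

```python
import numpy as np

def glcm(image, dx, dy, levels):
    """Co-occurrence matrix P_d of Eq. 3 for displacement d = (dx, dy).

    Pixel pairs are (x, y) and (x + dx, y + dy), with x indexing rows and
    y indexing columns (an illustrative convention)."""
    P = np.zeros((levels, levels), dtype=np.int64)
    U, V = image.shape
    for x in range(U):
        for y in range(V):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < U and 0 <= y2 < V:
                P[image[x, y], image[x2, y2]] += 1
    return P

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
P = glcm(img, 0, 1, levels=3)   # horizontal neighbor pairs
```

For this 3x3 image and d = (0, 1) there are six pixel pairs, e.g. the bottom row contributes two (2, 2) pairs.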
Generally, for a gray-level image whose pixels are represented by 8-bit integers (i.e., G = 256), the GLCM is a matrix of size 256x256. Extracting features from such a large matrix is quite computationally expensive. An intuitive way to reduce the size of the GLCM is to reduce the number of gray levels G of the original image by equal quantization (Latif-Amet et al., 2000). However, with equal quantization, if the gray level of a defect is close to that of the normal texture, it is probably quantized to the same level as the normal texture, which makes it hard to distinguish. Figure 2a and b illustrate a defective fabric image and its equal quantization result with 16 levels, respectively. The quantized image is histogram equalized for visual convenience. It can be seen that the defect in Fig. 2b becomes ambiguous and hard to detect.
In this study, an adaptive quantization method is proposed based on the fabric texture model elaborated earlier. As shown in Fig. 1, the gray-level histogram of normal texture obeys a mixture distribution containing two Gaussians. Due to the randomness of texture intensity, which Gaussian a single pixel belongs to and the distance between its gray-level and the center of that Gaussian, termed the Gaussian Central Distance (GCD), are of more importance than the exact gray-level of the pixel. Each pixel in the image is therefore quantized by examining its GCD.

Prior to the calculation of the GCD, we should decide which of the two Gaussians the pixel belongs to. We consider that if gray-level g<μ_{1}, then it belongs to the first Gaussian and if g>μ_{2}, then it belongs to the second Gaussian. For a gray-level μ_{1}≤g≤μ_{2}, the following probability criterion is used to determine its Gaussian:

P(ω_{i}|g) = p(g|ω_{i})P(ω_{i}) / (p(g|ω_{1})P(ω_{1})+p(g|ω_{2})P(ω_{2}))        (Eq. 4)

where, ω_{i}, i = 1 and 2, denotes the first and second Gaussian, respectively, p(g|ω_{i}) is the Gaussian density with parameters μ_{i} and σ_{i} and P(ω_{i}) is the prior probability of each Gaussian, i.e., P(ω_{1}) = P_{1} and P(ω_{2}) = P_{2} in Eq. 2.

Fig. 2(a-c): 
Comparison of quantization methods 
If P(ω_{1}|g)>P(ω_{2}|g), then g is considered to belong to the first Gaussian, otherwise it belongs to the second Gaussian. This classification rule is known as Bayes classification, which minimizes the misclassification rate. Formally, the adaptive quantization method consists of the following steps:
Step 1: 
Divide each Gaussian separately into non-overlapping intervals
formulated as [0, μ_{i}-(0.5+N)λσ_{i}),
[μ_{i}+(0.5+n)λσ_{i}, μ_{i}+(0.5+n+1)λσ_{i})
and [μ_{i}+(0.5+N)λσ_{i}, 255], where n = -(N+1), -N, …, N-1.
λ is a constant determining the width of the intervals and each
Gaussian is divided into 2N+3 intervals 
Step 2: 
For each gray-level g from 0 to 255, determine which Gaussian it belongs
to and which interval of that Gaussian it falls in, then vote for that
interval. Intervals which receive no votes are discarded and the completely
voted intervals are preserved. Here, a completely voted interval refers
to an interval for which all the gray-levels within it vote only for that interval 
Step 3: 
For the remaining intervals which are not completely voted: if any of them
overlaps with another non-completely voted interval, merge them together
to form a new interval. If the merged interval overlaps with any completely
voted interval, truncate the overlapping part of the merged interval
so that all the intervals are non-overlapping. Finally, index
all the intervals with zero-based numbers, making sure that larger gray-levels
correspond to larger indices. Each interval forms a new quantized level. 
Figure 3 gives an example of the adaptive quantization procedure.
Figure 3a and b illustrate the division results of the two Gaussians of a mixture
distribution with λ = 1, N = 2. Each Gaussian is divided into 7 intervals.
The final quantization result is illustrated in Fig. 3c. The intervals
[0, 54), [54, 64), [64, 74) and [74, 84) of the first Gaussian and [99, 125),
[125, 151), [151, 176) and [176, 255] of the second Gaussian are completely
voted intervals, so each of them directly forms a quantized level in Fig. 3c.
The intervals [84, 94) of the first Gaussian and [74, 99) of the second Gaussian
are non-completely voted intervals and they overlap with each other, so they
are merged into [74, 99). As [74, 84) is a completely voted interval, the merged
interval is truncated into [84, 99), which forms a new quantized level in Fig. 3c.
Finally, the 256 gray-levels are quantized to 9 levels.
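A simplified sketch of the resulting quantizer is below (hypothetical mixture parameters). It keeps the Gaussian assignment of Eq. 4 and the GCD binning of Step 1, but replaces the vote/merge/truncate bookkeeping of Steps 2-3 by simply relabeling each (Gaussian, interval) pair in gray-level order, so its level boundaries can differ from the authors' exact procedure:

```python
import numpy as np

def gaussian_pdf(g, mu, sigma):
    return np.exp(-(g - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def build_quantizer(P1, mu1, s1, P2, mu2, s2, lam=1.0, N=2):
    """Map each gray-level 0..255 to a quantized level (simplified sketch)."""
    table = np.zeros(256, dtype=int)
    keys = []
    for g in range(256):
        # Gaussian assignment: thresholds outside [mu1, mu2], Bayes (Eq. 4) inside
        if g < mu1:
            k = 0
        elif g > mu2:
            k = 1
        else:
            post1 = P1 * gaussian_pdf(g, mu1, s1)
            post2 = P2 * gaussian_pdf(g, mu2, s2)
            k = 0 if post1 >= post2 else 1
        mu, s = (mu1, s1) if k == 0 else (mu2, s2)
        z = (g - mu) / (lam * s)   # GCD in units of lam * sigma
        # Interval index: left tail -(N+2), middle bins -(N+1)..N-1,
        # right tail N -> 2N+3 intervals per Gaussian, as in Step 1
        n = int(np.clip(np.floor(z - 0.5), -(N + 2), N))
        key = (k, n)
        if key not in keys:
            keys.append(key)              # new level, in ascending gray order
        table[g] = keys.index(key)
    return table

table = build_quantizer(P1=0.6, mu1=80, s1=10, P2=0.4, mu2=160, s2=12)
```

Because the (Gaussian, interval) pairs are first encountered in ascending gray-level order, larger gray-levels map to larger (or equal) level indices, matching the indexing requirement of Step 3.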

Fig. 3(a-c): 
Example of adaptive quantization procedure 
The pixel is quantized according to its GCD, which is measured in units of the standard deviation of its Gaussian rather than by its raw gray-level. Pixels belonging to the same Gaussian with similar GCDs, rather than similar gray-levels, tend to be quantized to the same level. In particular, quantized levels 0 and L-1 are two special levels, where L denotes the total number of quantized levels. It can be seen that quantized level 0 lies outside the confidence interval of the first Gaussian, which means, in the sense of hypothesis testing, that the pixels within this quantized level do not belong to the normal texture. As the pixels in this level are much darker than the normal texture, they are called darkness exceptional pixels. Similarly, the pixels in quantized level L-1 are called lightness exceptional pixels. The advantage of the adaptive quantization method is that it preserves most of the useful information of the original image while reducing the 256 gray-levels to only several quantized levels. Besides, as the quantization is based on the GCD of the texture model, a defective region does not obey the texture model and thus contains more lightness or darkness exceptional pixels, so the quantization method also emphasizes the presence of defects. Figure 2c shows the result of adaptive quantization of Fig. 2a with only 9 quantized levels, which is even better than Fig. 2b with 16 levels.
SUPPORT VECTOR DATA DESCRIPTION (SVDD)
Support vector data description is a powerful kernel method that has been commonly used for novelty detection (Tax and Duin, 2004). It provides a solution to the one-class classification problem without any negative samples in training. By mapping the data into a higher dimensional space, the objective of SVDD is to find, in this space, a spherically shaped boundary around the training dataset such that the sphere encloses as many samples as possible while having minimum volume. The sphere is characterized by its center c and radius R>0. Let {x_{i}}, i = 1, 2,…, M, be a set of training examples, x_{i}∈R^{d}, d being the dimension of the input space and M the number of training samples. The minimization of the sphere volume is achieved by minimizing its squared radius R². To allow for the presence of outliers, slack variables ξ_{i} are introduced, so that the problem of constructing an optimal separating hypersphere is converted to the following optimization problem:

min_{R, c, ξ} R² + (1/(vM)) Σ_{i=1}^{M} ξ_{i}
s.t. ‖Φ(x_{i})-c‖² ≤ R²+ξ_{i}, ξ_{i} ≥ 0, i = 1, 2,…, M        (Eq. 5)

where, ξ_{i} accounts for possible errors, v is a user-provided parameter specifying an upper bound on the fraction of outliers, which controls the trade-off between the hypersphere volume and the errors and Φ is a map function which maps the input data into a higher dimensional space. The corresponding dual problem is:

max_{α} Σ_{i=1}^{M} α_{i}K(x_{i}, x_{i}) - Σ_{i=1}^{M} Σ_{j=1}^{M} α_{i}α_{j}K(x_{i}, x_{j})
s.t. 0 ≤ α_{i} ≤ 1/(vM), Σ_{i=1}^{M} α_{i} = 1        (Eq. 6)

where, α = {α_{1}, α_{2},…, α_{M}} is the Lagrange multiplier vector and K(·,·) is a kernel function such that K(x_{i}, x_{j}) = Φ(x_{i})·Φ(x_{j}). The most widely used kernel function is the Radial Basis Function (RBF):

K(x_{i}, x_{j}) = exp(-‖x_{i}-x_{j}‖²/σ²)        (Eq. 7)
The optimization problem in Eq. 6 can be solved using standard quadratic programming methods and an optimal solution for α obtained. All x_{i} corresponding to non-zero α_{i} are called Support Vectors (SV), which usually constitute a small fraction of the training data. The optimal solution for c is then:

c = Σ_{i=1}^{M} α_{i}Φ(x_{i})

On testing, a new sample x is subjected to the map function Φ and if the distance from the mapped data to the center of the optimal hypersphere is smaller than or equal to the radius of the hypersphere, then it is accepted, otherwise it is rejected, which can be formulated as:

f(x) = sign(R² - ‖Φ(x)-c‖²) = sign(R² - K(x, x) + 2Σ_{i} α_{i}K(x_{i}, x) - Σ_{i}Σ_{j} α_{i}α_{j}K(x_{i}, x_{j}))        (Eq. 8)

Because the optimal hypersphere is found by solving a quadratic programming problem, SVDD does not suffer from the problem of local minima, which means that SVDD training always finds a global minimum and thus has good generalization capability.
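The decision rule of Eq. 8 needs only kernel evaluations. In the sketch below the multipliers α are a uniform toy stand-in for the true QP solution of Eq. 6 and R is set from the farthest training point, so only the accept/reject mechanics are illustrated, not the training itself:

```python
import numpy as np

def rbf(a, b, sigma2):
    # RBF kernel of Eq. 7 between row-vector sets a (m x d) and b (n x d)
    d2 = ((a[:, None, :] - b[None, :, :])**2).sum(-1)
    return np.exp(-d2 / sigma2)

def svdd_distance2(x, X, alpha, sigma2):
    """Squared distance ||Phi(x) - c||^2 of Eq. 8 via the kernel trick."""
    Kxx = rbf(x, x, sigma2).diagonal()
    KxX = rbf(x, X, sigma2)
    KXX = rbf(X, X, sigma2)
    return Kxx - 2 * KxX @ alpha + alpha @ KXX @ alpha

# Toy stand-in: alpha would normally come from solving the dual of Eq. 6
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(50, 2))         # "non-defective" training data
alpha = np.full(50, 1.0 / 50)                   # uniform weights (sum to 1)
sigma2 = 4.0
R2 = svdd_distance2(X, X, alpha, sigma2).max()  # radius from farthest training point

accept = lambda x: svdd_distance2(np.atleast_2d(x), X, alpha, sigma2)[0] <= R2
```

A point near the training cloud falls inside the hypersphere and is accepted; a point far away maps to a region of feature space where all kernel values vanish, so its distance exceeds R and it is rejected.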
DEFECT DETECTION ALGORITHM
Detection of defects is considered a one-class classification problem. The defect detection algorithm is divided into two phases: a learning phase and a classification phase. In the learning phase, the fabric image is quantized and a set of GLCM features, as well as some new features, are extracted from the quantized image to characterize the fabric texture. The extracted features are used as training data for SVDD training to generate an SVDD classifier. In the classification phase, the same features are extracted from the quantized test image and fed to the SVDD classifier to determine whether it is defective or not.
Feature extraction: Several features extracted from the GLCM are used in the proposed detection algorithm. Generally, for the GLCM, a displacement parameter (dx, dy) by which two highly related pixels are separated is a good selection to characterize the fabric texture. A natural selection of displacement parameters for texture characterization is (0, 1), (1, 1), (1, 0) and (1, -1), which are also used for fabric texture characterization by Latif-Amet et al. (2000) and Lin (2010), since neighboring pixels are considered highly related. These four displacement parameters correspond to 0, 45, 90 and 135°, respectively, the directions along which most defects are present. Chan and Pang (2000) found from the frequency spectrum of the fabric texture image that texture periodicities exist in the warp (0°) and fill (90°) directions, which can be calculated as the reciprocals of the first harmonic frequencies f_{0} and f_{90} along the warp and fill directions, respectively. Because of the high periodicity of the fabric texture, two pixels separated by one texture period are also considered highly related. Therefore, two additional displacement parameters are used in our algorithm: (0, T_{0}) and (T_{90}, 0), where T_{0} and T_{90} denote the texture periodicities at 0 and 90°, respectively, that is T_{0} = 1/f_{0} and T_{90} = 1/f_{90}. f_{0} and f_{90} can be obtained from the 1D Fourier spectrum of the fabric texture at 0 and 90°, respectively. Figure 4 shows the Fourier spectrum of a real fabric texture at 0°; the fabric image is made zero mean in advance to suppress the Direct Current (DC) component. Because T_{0} and T_{90} are usually floating-point values while displacement parameters must be integers for the computation of the GLCM, first-order linear interpolation is used to estimate the values of I(x+dx, y+dy) when dx and dy are floating-point. Haralick et al. (1973) proposed 14 features from the GLCM for texture classification; in this study only two of them, namely the contrast CON and the inverse difference moment IDM, are used:

CON = Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} (i-j)² p(i, j)        (Eq. 9)

IDM = Σ_{i=0}^{L-1} Σ_{j=0}^{L-1} p(i, j)/(1+(i-j)²)        (Eq. 10)

where, L is the column (or row) number of the GLCM (i.e., the total number of quantized levels) and p(i, j) is the normalized entry of the co-occurrence matrix, that is p(i, j) = P_{d}(i, j)/R, where P_{d}(i, j) is the GLCM with displacement parameter d and R is the total number of pixel pairs.
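Given a co-occurrence matrix P_d, the two features of Eq. 9 and 10 are direct sums; a minimal sketch:

```python
import numpy as np

def con_idm(P):
    """Contrast (Eq. 9) and inverse difference moment (Eq. 10) from a GLCM P."""
    p = P / P.sum()                     # normalized entries p(i, j)
    i, j = np.indices(p.shape)
    con = ((i - j)**2 * p).sum()
    idm = (p / (1 + (i - j)**2)).sum()
    return con, idm

# Perfectly uniform texture: all co-occurrences on the diagonal
P_flat = np.eye(4) * 10
con, idm = con_idm(P_flat)   # con = 0.0, idm = 1.0
```

A diagonal GLCM (identical neighboring levels) gives the extreme values CON = 0 and IDM = 1; a defect that mixes quantized levels moves mass off the diagonal and pushes both features away from these extremes.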
In addition, we propose two extra features for each displacement parameter (dx, dy), based on the following considerations. As discussed earlier, a pixel in the original image is quantized by its GCD; a pixel with a larger GCD is less likely to belong to the normal texture and its intensity tends to be much darker or lighter than the normal texture. It is found that defects, within their boundaries, tend to have more dark pixels (e.g., oil-stain, dirty-yarn, etc.) or light pixels (e.g., miss-pick, thin-place, etc.) than the normal texture. In turn, large quantities of dark pixels or light pixels within a local region may indicate a defect in that region. Figure 5 shows a data fragment of the quantized image of Fig. 2a with L = 8. All the lightness exceptional pixels, whose quantized level is larger than a threshold UL corresponding to the upper limit of the 70% confidence interval of the second Gaussian, are marked by rectangles. There are mainly three reasons for the appearance of lightness exceptional pixels: variation of the normal texture, noise and real defects.

Fig. 4: 
Magnitude of frequency spectrum of fabric image along horizontal
direction 
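The period estimate T_0 = 1/f_0 from the averaged 1-D spectrum (cf. Fig. 4) can be sketched as follows, with a synthetic periodic row profile standing in for a real fabric image:

```python
import numpy as np

# Synthetic fabric rows: yarn period of 8 pixels plus noise
rng = np.random.default_rng(2)
width = 256
rows = np.sin(2 * np.pi * np.arange(width) / 8.0) + 0.1 * rng.normal(size=(64, width))

profile = rows - rows.mean()     # zero mean suppresses the DC component
spectrum = np.abs(np.fft.rfft(profile, axis=1)).mean(axis=0)  # averaged 1-D spectra

k = spectrum[1:].argmax() + 1    # first-harmonic bin (skip residual DC bin)
f0 = k / width                   # cycles per pixel
T0 = 1.0 / f0                    # texture period T_0 in pixels
```

The same procedure applied to columns yields T_90; both periods are then used as the two period-based displacement parameters.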

Fig. 5: 
Data fragment of quantized image of Fig. 2a 
Because of the random nature of the variation of normal texture and of the noise, lightness exceptional pixels and darkness exceptional pixels tend to be scattered across the fabric image. However, real defects tend to be continuous and constitute a small portion of the field. Thus, it is obvious that the connected lightness exceptional pixels in the center of Fig. 5 indicate a defect corresponding to the miss-pick in Fig. 2a, while the other scattered lightness exceptional pixels are caused by variation of the normal texture or by noise. A feature is proposed to characterize this property of defects.

Given a displacement parameter (dx, dy), let Q be the quantized image and u_{k} = x+k·dx, v_{k} = y+k·dy. If for all k = 0, 1, …, C-1, Q(u_{k}, v_{k}) are larger than UL and for k = -1 and C, Q(u_{k}, v_{k}) are not larger than UL, then the points at positions (u_{k}, v_{k}) (k = 0, 1,…, C-1) are considered a Lightness Exceptional Run (LER). C denotes the length of the LER. The feature Long Lightness Exceptional Run Emphasis (LLERE) is defined as:
where, S_{l} denotes the set containing all the LERs whose length is larger than l in the quantized image and C_{s} and s(k) denote the length and the kth element of the LER s, respectively. The feature LLERE is similar to the long run emphasis feature of the Gray Level Run Length Matrix (GLRLM) (Galloway, 1975), which has been widely used for texture characterization and classification. They are both built on statistics of consecutive pixels with the same attribute. However, in Eq. 11 the LERs whose length is smaller than or equal to l, which are probably caused by variation of the normal texture or by noise, are not involved in the computation, so that LLERE can emphasize the presence of real defects. l is set to the 95th percentile of the order statistics obtained from non-defective image samples. Similarly, another feature called Long Darkness Exceptional Run Emphasis (LDERE), the counterpart of LLERE, is also used:

where, Z_{l} denotes the set containing all the Darkness Exceptional Runs (DER) whose length is larger than l in the quantized image. The definition of a DER is similar to that of an LER, except that the values of its elements are smaller than a threshold DL corresponding to the lower limit of the 70% confidence interval of the first Gaussian. C_{z} and z(k) denote the length and the kth element of the DER z. Compared to the features extracted from the GLCM, the features LLERE and LDERE also characterize the relationship of pixels separated by a certain distance in a given direction, but they put more emphasis on the continuousness of exceptional pixels along that direction, which makes them suitable for detecting tiny directional defects. In summary, 6 displacement parameters are used and 4 features are extracted for each displacement parameter, so in all 24 features are extracted to form a feature vector.
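The run extraction that both features build on can be sketched as below. Only the maximal runs above UL are collected here; the exact LLERE/LDERE weighting of Eq. 11 and 12 is the authors' and is not reproduced:

```python
import numpy as np

def exceptional_runs(Q, dx, dy, UL):
    """Collect maximal runs of pixels with quantized level > UL along
    direction (dx, dy); each run is returned as a list of levels."""
    U, V = Q.shape
    mask = Q > UL
    runs = []
    for x in range(U):
        for y in range(V):
            if not mask[x, y]:
                continue
            # Start a run only where the previous pixel along (dx, dy)
            # is not exceptional, so every run is maximal (the k = -1
            # and k = C conditions of the LER definition).
            px, py = x - dx, y - dy
            if 0 <= px < U and 0 <= py < V and mask[px, py]:
                continue
            run = []
            cx, cy = x, y
            while 0 <= cx < U and 0 <= cy < V and mask[cx, cy]:
                run.append(int(Q[cx, cy]))
                cx, cy = cx + dx, cy + dy
            runs.append(run)
    return runs

Q = np.array([[7, 7, 0, 7],
              [0, 0, 0, 0]])
runs = exceptional_runs(Q, 0, 1, UL=5)   # horizontal runs: [7, 7] and [7]
```

Runs no longer than the percentile threshold l would then be dropped before the emphasis statistic is accumulated, which is what lets the features ignore isolated exceptional pixels.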
Learning phase: Images of non-defective texture are used in the learning phase. All these images are divided into non-overlapping subregions. The feature vectors V_{m}, m = 1, 2, …, M, are extracted from the subregions using the feature extraction method proposed earlier, where M denotes the total number of non-defective subregions. Then the feature vectors are normalized as:

NV_{m}(r) = (V_{m}(r)-η_{r})/β_{r}        (Eq. 13)

Where:

η_{r} = min_{m} V_{m}(r)        (Eq. 14)

β_{r} = max_{m} V_{m}(r) - min_{m} V_{m}(r)        (Eq. 15)

r = 0, 1, 2, … is the feature index within the feature vector, NV_{m} denotes the normalized feature vector and η_{r} and β_{r} are called the offset coefficient and scale factor of the normalization, respectively. The objective of the normalization is to set the values of all the features in the feature vector within the interval [0, 1] and to make sure that all the features have nearly the same weight. These feature vectors are then subjected to SVDD training. During the training, two parameters must be decided: the trade-off parameter v in Eq. 6 and the RBF width parameter σ in Eq. 7. A large v allows more outliers of the hypersphere in the training dataset, which corresponds to a larger rejection rate and a larger fraction of support vectors. Tax and Duin (2004) found that the fraction of support vectors relates to the false alarm rate, so the parameter v can be decided by the expected false alarm rate. To choose the optimal value of the parameter σ, cross validation is used in the SVDD training.
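A sketch of the normalization step, assuming the offset and scale are the per-feature training minimum and range (a min-max mapping to [0, 1], which is what the stated objective of the normalization implies):

```python
import numpy as np

def fit_normalizer(V):
    """Per-feature offset eta_r and scale beta_r from training vectors V (M x d).

    Assumes min-max normalization, so training features land in [0, 1]."""
    eta = V.min(axis=0)
    beta = V.max(axis=0) - V.min(axis=0)
    beta[beta == 0] = 1.0        # guard against constant features
    return eta, beta

def normalize(V, eta, beta):
    return (V - eta) / beta

rng = np.random.default_rng(3)
V_train = rng.uniform(10, 50, size=(100, 24))   # 24 features per subregion
eta, beta = fit_normalizer(V_train)
NV = normalize(V_train, eta, beta)

# A test-phase vector reuses eta and beta and may leave [0, 1]
E = rng.uniform(0, 80, size=24)
NE = normalize(E, eta, beta)
```

Reusing η and β from the learning phase is what allows the classification phase to produce normalized values outside [0, 1] for anomalous subregions.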
Classification phase: All the fabric images under inspection are divided into non-overlapping subregions of the same size as in the learning phase. From each subregion, a feature vector E is extracted using the feature extraction method proposed earlier. The feature vector E is then normalized using the η_{r} and β_{r} computed in the learning phase:

NE(r) = (E(r)-η_{r})/β_{r}        (Eq. 16)

Unlike the normalization in the learning stage, the values of the normalized features NE(r) can be less than zero or greater than one. The normalized feature vector NE is then subjected to the SVDD classifier and the final classification result is acquired by Eq. 8. If its output is 1, then the subregion is considered non-defective, otherwise it is considered defective.
RESULTS AND DISCUSSION
Three datasets containing one plain and two twill fabrics with different texture backgrounds are used to evaluate the performance of the fabric defect detection algorithm. All three fabrics and the defects on them are produced in factory practice. All of the images are acquired by a line-scan CCD camera with a spatial resolution of 0.2 mm/pixel against back-lighting illumination and digitized into 256x256 pixels with 256 gray levels. Detailed information on the three datasets is presented in Table 1. Each image is divided into non-overlapping subregions of size 64x64 pixels and each subregion is considered one sample for defect detection.
In the training phase, for each dataset, 1000 non-defective samples are used for SVDD training. The training parameter v is set to 0.01 and σ is decided by 10-fold cross validation. Figure 6a and b illustrate the cross validation accuracy and the proportion of support vectors for various values of σ^{2}, where σ^{2} = 2^{3}, 2^{4},…, 2^{15} are selected as candidates, as suggested by Hsu et al. (2003). It can be seen from Fig. 6a that both too small and too large values of σ^{2} result in low cross validation accuracy. A large value of σ^{2} creates a simple decision boundary with a small proportion of support vectors, which is unable to separate the defective and non-defective samples and leads to a high miss rate, while a small value of σ^{2} creates an excessively complex decision boundary with a large proportion of support vectors, which leads to overfitting and a high false alarm rate. σ^{2} = 2^{7}, 2^{6} and 2^{4}, which give the highest cross validation accuracy, are selected for the three datasets, respectively; their corresponding support vector proportions are 1.3, 2.3 and 1.3%, which occupy only a very small portion of the total training vectors. It can also be seen from Fig. 6a that the cross validation accuracy is nearly flat around the optimal value of σ^{2}, so the cross validation method for finding the optimal σ^{2} is robust.
Figure 7 illustrates the adaptive quantization and detection results of several typical defective samples from the three datasets with quantization parameters λ = 1, N = 2. Datasets 1, 2 and 3 are quantized to 8, 9 and 10 levels, respectively.
Table 1: 
Information of experimental datasets 

We can see that after the quantization all the defects are still clear and intact, but the number of gray-levels is decreased from 256 to 8, 9 and 10, respectively, which greatly reduces the computational load of the GLCM and its features.
Quantization parameter selection: As mentioned earlier, there are two important parameters affecting the quantization procedure. One is the interval width λ; the other is N, which relates to the total number of quantized levels. The interval [μ_{i}-(0.5+N)λσ_{i}, μ_{i}+(0.5+N)λσ_{i}] constitutes a confidence interval of the ith Gaussian. Gray-levels outside this interval are considered either darkness exceptional or lightness exceptional. A large value of N corresponds to more computational load, while a small value of N may lose some detailed information about the defects and make them hard to detect. In order to find the optimal parameters, several pairs of (λ, N) are applied to the three datasets. The detection results of the three datasets for the different parameter pairs (λ, N) are presented in Table 2-4, respectively. λ is set inversely proportional to N, so that the confidence intervals of all parameter pairs are nearly the same. As the false alarm rate relates to the trade-off parameter v in Eq. 6, it does not vary greatly across parameter pairs. From N = 0 to 2, the miss detection rate decreases substantially, which means the detection performance becomes much better, while from N = 3 to 6 the miss detection rate does not improve greatly and changes inversely with the false alarm rate, meaning the detection performance is not greatly improved while more computational load is added. As N increases, more detailed information of the fabric texture is preserved. For the defects miss-yarn of Dataset 1 and dirty-yarn of Dataset 2, even a small value of N is sufficient to detect most of them, because they contain many dark or light exceptional pixels whose gray-levels lie outside the confidence interval.

Fig. 6(a-b): 
The cross validation accuracy and the proportion of support
vectors for various values of σ^{2} 

Fig. 7(a-l): 
(a, b) Miss-pick and miss-yarn of Dataset 1, (c, d) dirty-yarn
and hole of Dataset 2, (e, f) miss-pick and thin-place of Dataset 3 and
(g-l) their corresponding quantized images and detection results 
Table 2: 
Detection results of Dataset 1 with different parameter pairs
(λ, N) 

Values in brackets are percentages 
Table 3: 
Detection results of Dataset 2 with different parameter pairs
(λ, N) 

Values in brackets are percentages 
Generally, N = 2 and 3 are optimal parameters, which can be used to detect most of the defects.
Characteristic of features: The proposed algorithm uses all 24 described features to detect different kinds of defects. In order to understand the specific characteristic of each feature, we also investigate the discriminative features for each kind of defect.
Table 4: 
Detection results of Dataset 3 with different parameter pairs
(λ, N) 

Values in brackets are percentages 
Table 5: 
Discriminative features of each kind of defect in Fig. 7 

Here, the discriminative features refer to those normalized features, among all 24 features, which have large distances between the defective samples and the non-defective samples. A criterion function is used to characterize this distance, formulated as:

J = (U_{d}-U_{n})/σ_{n}        (Eq. 17)

where, U_{n} and σ_{n} are the mean value and standard deviation of the normalized feature extracted from the non-defective subregions in the learning phase (Eq. 13) and U_{d} is the mean value of the normalized feature extracted from the defective subregions in the classification phase (Eq. 16). A higher magnitude of J indicates a larger distance between the feature of the defect and that of the normal texture, which means better detection performance. Table 5 shows the discriminative features and their values of J for each kind of defect in Fig. 7; only the displacement parameters and features which have large J are presented, the others being omitted. Generally, the values of J in Table 5 are consistent with the detection rates in Table 2-4, that is, a larger value of J corresponds to a higher detection rate. It also suggests that the features LLERE and LDERE are more effective than CON and IDM for detecting tiny directional defects such as the miss-pick of Dataset 1, the dirty-yarn of Dataset 2 and the thin-place of Dataset 3, because CON and IDM characterize the global texture pattern and are not very sensitive to the local textural change caused by tiny defects. However, LLERE and LDERE emphasize the continuousness of lightness exceptional pixels and darkness exceptional pixels, respectively, so they are suitable for characterizing tiny directional defects. All of the features LLERE, LDERE, CON and IDM with displacement parameter (0, 1) have large values of J for the defect miss-yarn, because the texture pattern of miss-yarn is quite different from the normal texture and contains many dark and light exceptional pixels in the horizontal orientation. Particularly, for the defect thin-place in Fig. 7e, which is not even obvious to a human inspector and thus very difficult to detect, the defective pixels (lightness exceptional pixels) are arranged periodically in the horizontal direction, so the feature LLERE with displacement parameter (0, T_{0}) can characterize it.
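The criterion is computed directly from the two sample sets; a sketch, assuming J is the standardized mean difference described above:

```python
import numpy as np

def criterion_J(feature_nondef, feature_def):
    """Standardized distance between defective and non-defective samples
    of one normalized feature: J = (U_d - U_n) / sigma_n."""
    U_n = feature_nondef.mean()
    sigma_n = feature_nondef.std()
    U_d = feature_def.mean()
    return (U_d - U_n) / sigma_n

rng = np.random.default_rng(4)
nondef = rng.normal(0.5, 0.05, 1000)   # normalized feature on normal texture
defect = rng.normal(0.9, 0.05, 50)     # same feature on defective subregions
J = criterion_J(nondef, defect)        # large |J| -> discriminative feature
```

Ranking features by |J| per defect type is what would allow a user to drop the weakly discriminative ones for a given fabric.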
Real-time performance: In order to evaluate the real-time performance of our detection algorithm, a prototype defect detection system was built in our laboratory. The architecture of the detection system is illustrated in Fig. 8. A line-scan camera, a Dalsa SP14 with a pixel resolution of 2048, is used to capture images of fabrics moving on a conveyor belt. The spatial resolution is 0.2 mm/pixel, so each camera covers 0.4 m in the transverse direction and four cameras are used to cover 1.6 m. An encoder synchronizes the scan rate of the cameras with the movement speed of the fabric. The image data from the cameras are transferred to the image acquisition and processing card via a Camera Link interface. The processing unit is a Digital Signal Processor (DSP), a TI TMS320C6713 operating at 300 MHz, on which the proposed detection algorithm is implemented.

Fig. 8: 
The architecture of prototyped defect detection system 

Fig. 9: 
Real time performance of the detection algorithm 
All the detection results are uploaded to a host computer for display via the PCI bus.
Compared to the generic CPU in the host computer, the DSP has better real-time performance, benefiting from its specialized architecture, such as a hardware multiplier and instruction pipelining. Here, we focus only on the computational time of the classification phase; the time-consuming procedures in the learning phase, such as parameter estimation of the texture model, SVDD training and cross validation, are not taken into consideration, because the learning phase is finished before the real-time inspection and is thus usually not time-constrained. The classification phase consists of two parts: feature extraction and decision. The computational time of feature extraction relates to the total number of quantized levels. The feature extraction time for a subregion of size 64x64 with different values of L is presented in Fig. 9. For comparison, the computational times on both the DSP and a generic CPU (P4 3.0 GHz) are presented. The computational time of non-quantized feature extraction with 256 gray-levels is 45844 μs on the DSP and 29635 μs on the generic CPU. We can see that the adaptive quantization greatly reduces the computational complexity of feature extraction and improves the real-time performance of the algorithm. The decision involves classification by the SVDD classifier, which is implemented with the software LIBSVM (Chang and Lin, 2001). According to Eq. 8, only the support vectors, which usually constitute a small fraction of the training data (Fig. 6b), are involved in the computation of classification. Moreover, LIBSVM uses a look-up table cache for the computation of the kernel function in Eq. 7, so the decision time is greatly reduced and occupies only a small portion of the detection algorithm. It can be seen from Fig. 9 that as L increases, the computational time increases non-linearly and, according to Table 2-4, L increases with N. However, when N is greater than 3 the detection rate does not increase appreciably, so N = 2 or 3 is a good selection, compromising between the detection rate and the real-time performance. The detection speed can reach 40 m min^{-1}.
CONCLUSIONS
A new textural fabric defect detection algorithm using SVDD has been demonstrated. A fabric texture model of a mixed Gaussian distribution was built on the gray-level histogram of the fabric image. Two GLCM features and two novel features were used to characterize the fabric texture pattern and emphasize the presence of defects. An adaptive quantization method based on the texture model was proposed to reduce the size of the GLCM, so that the computational complexity of feature extraction was tremendously reduced. The specific property of each feature was also discussed; users can remove unnecessary features to further improve the real-time performance. The SVDD classifier was used as a detector and achieved good detection results on three datasets in the experiments. A prototype defect detection system, with a high performance DSP as its processing unit, was built to evaluate the real-time performance of the proposed algorithm. The experiments indicated that the detection speed can reach 40 m min^{-1}, so the proposed algorithm is suitable for on-line inspection in industrial settings.
ACKNOWLEDGMENT
This study was supported by the Open Fund of the Image Processing and Intelligent Control Key Laboratory of the Education Ministry of China.