Biometric authentication has become increasingly popular in security systems. An iris recognition system is proposed in this paper. Features are extracted using the Gray Level Co-Occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRLM) in different directions from the normalized iris region. The proposed approach is a non-filter-based iris recognition technique and is invariant to iris rotation. A support vector machine is used for classification. Experimental results show that the fusion of GLCM and GLRLM features gives better accuracy than either feature set individually.
Index Terms--- Feature extraction, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Support Vector Machine (SVM).
INTRODUCTION
Biometric recognition is the automated recognition of individuals based on their physiological and behavioural characteristics. The recognition can be positive or negative. The human iris is the annular part between the pupil and the white sclera. Iris-based recognition can be non-invasive to users, since the iris is an internal organ that is nevertheless externally visible, which is of great importance for real-time approaches [4].
The iris has the following properties.
1. Stable - The unique pattern in the human iris is formed by 10 months of age and remains unchanged throughout one's lifetime.
2. Unique - The probability that two persons' irises produce the same pattern is extremely low. Even a person's left iris differs from the right iris.
3. Flexible - Iris recognition technology integrates easily into existing security systems or operates standalone.
4. Reliable - A distinctive iris pattern is not susceptible to theft, loss or compromise.
RELATED WORK
John Daugman [1] [2] in 1988 developed a recognition system whose algorithm is based on iris codes generated using 2D Gabor wavelets. Hamming distance was used for matching. The accuracy obtained by this iris recognition system was found to be high.
Wildes [3] in 1997 applied a Laplacian of Gaussian filter at multiple scales to produce a template, with normalized correlation used for matching. Boles [4] in 1998 presented a new algorithm based on zero crossings. In this algorithm the zero crossings of the wavelet transform are calculated at various resolution levels over concentric circles on the iris. The resulting one-dimensional signals are then compared with the model features using different dissimilarity functions. Nawal Alioua et al. [5] in 2011 presented a method for eye state analysis using iris detection based on the Circular Hough Transform.
A. T. Zaim and M. K. Quweider [12] in 2006 presented a new method of iris texture recognition for the purpose of human identification, extracting iris features using the Gray Level Co-Occurrence Matrix. The GLCM of each iris is calculated and normalized to minimize the effect of constant shifts in gray level intensities. Kaushik Roy, Prabir Bhattacharya and Ramesh Chandra [8] in 2007 proposed an improved iris recognition method to identify a person accurately using a novel iris segmentation scheme. The 1D log-Gabor wavelet technique is used for feature extraction and a Support Vector Machine (SVM) is used as the iris pattern classifier. They showed that the SVM classifier performs far better than backpropagation neural networks (BPNN), k-nearest neighbour (KNN), Hamming distance and Mahalanobis distance.
Zhonghua Lin and Bibo Lu [16] in 2010 suggested an iris recognition method based on optimized Gabor filters. The iris image was pre-processed and normalized, the feature vector was created using the iris code, and the Hamming distance method was used for recognition and matching. Sohail, A. S. and Sudhir P. [10] in 2011 presented a new approach for extracting local relative texture features from ultrasound medical images using Gray Level Run Length Matrix (GLRLM) based global features. Significant improvement was noticed over the traditional GLRLM-based feature extraction method.
Manavalan, R., and Thangavel, K. [13] in 2012 presented an evaluation of textural feature extraction from the GLRLM for prostate cancer TRUS medical images. The experiment was done on transrectal ultrasound images. The feature vector was created using the GLRL matrix in different directions from the segmented region, and a support vector machine was used for classification. Accuracy was found to be nearly 85% to 100% when the feature vector was created using the GLRLM in combined directions.
In this paper we have used efficient segmentation and normalization methods. GLCM and GLRLM features are extracted, and a support vector machine is used as the iris pattern classifier. The parameter selection of the SVM plays a very important role in improving the overall generalization performance.
The paper is organized into the following sections. Section 2 gives an overview of the proposed method, Section 3 gives experimental results and Section 4 concludes the experiment.
2. PROPOSED METHOD
An iris recognition system has the following sub-systems: i) image pre-processing ii) feature extraction iii) classification. For pre-processing, Canny edge detection, the circular Hough transform and the rubber sheet method are used. In the feature extraction phase GLCM and GLRLM features are extracted. A support vector machine is used as the classifier.
2.1 Iris Image Pre-processing:
The acquired image contains irrelevant parts such as the eyelids, eyelashes and pupil, which should be removed. The original image, as in Fig. 1, needs to be pre-processed. Pre-processing contains two steps: image segmentation and normalization.
Figure 1:Original Eye Image
Image Segmentation:
Two methods are used for image segmentation.
1. Canny Edge Detection: The Canny edge detection method was developed by John F. Canny [15] in 1986. It has become one of the standard edge detection methods and is still used in research. The purpose of edge detection is to reduce the amount of data in an image. Edges are those places in an image that correspond to object boundaries: pixels where the image brightness changes abruptly.
The algorithm runs in five steps:
1. Smoothing: Blurring of the image to remove noise.
2. Finding gradients: Edges should be marked where the gradient of the image has large magnitude.
3. Non-maximum suppression: Only local maxima should be marked as edges.
4. Double Thresholding: Potential edges are determined by thresholding.
5. Edge tracking by hysteresis: Final edges are determined by suppressing all edges that are not connected to a very strong edge.
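The steps above can be sketched in plain numpy. This is only a simplified illustration covering smoothing, gradient computation and double thresholding (steps 1, 2 and 4); non-maximum suppression and hysteresis tracking are omitted, and the 3×3 box blur and the threshold values are illustrative assumptions, not the exact Canny kernels.

```python
import numpy as np

def simple_edges(img, low=0.1, high=0.3):
    """Simplified sketch of Canny stages 1, 2 and 4: smoothing,
    gradient magnitude, double threshold. Non-maximum suppression
    and hysteresis tracking (steps 3 and 5) are omitted."""
    # 1. Smoothing: 3x3 box blur to remove noise
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    sm = sum(pad[dy:dy + h, dx:dx + w]
             for dy in range(3) for dx in range(3)) / 9.0
    # 2. Finding gradients: central differences, then magnitude
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    if mag.max() > 0:
        mag = mag / mag.max()
    # 4. Double thresholding: strong edges and weak (potential) edges
    strong = mag >= high
    weak = (mag >= low) & ~strong
    return strong, weak

# Vertical step edge: the gradient is largest along the boundary
img = np.zeros((8, 8))
img[:, 4:] = 1.0
strong, weak = simple_edges(img)
print(strong.any())
```

A full implementation would thin the strong-edge map (step 3) and keep weak edges only where they connect to strong ones (step 5).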
2. Circular Hough Transform:
The Hough transform is a feature extraction technique used in image analysis. Its purpose is to find imperfect instances of objects within a certain class of shapes by a voting procedure, using an array called the accumulator. Finding circles in an image uses a modified Hough transform called the Circular Hough Transform. This method is used to extract the circular portion of the iris [5]; only the circular iris region is extracted, as shown in Fig. 2, using the circular Hough transform method.
Figure 2:Circular representation of Iris
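The voting procedure can be sketched with a minimal accumulator in plain numpy. The 100-sample angular discretization and the candidate radius list are illustrative assumptions; a practical detector (e.g. on real iris images) would vote only along the gradient direction and smooth the accumulator.

```python
import numpy as np

def circular_hough(edge_points, shape, radii):
    """Minimal circular Hough transform sketch: every edge pixel votes
    for the centres of all circles of the candidate radii that could
    pass through it; the accumulator peak gives the best (cx, cy, r)."""
    acc = np.zeros((len(radii), shape[0], shape[1]), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    for (y, x) in edge_points:
        for ri, r in enumerate(radii):
            # candidate centres lie on a circle of radius r around (y, x)
            cy = np.rint(y - r * np.sin(thetas)).astype(int)
            cx = np.rint(x - r * np.cos(thetas)).astype(int)
            ok = (0 <= cy) & (cy < shape[0]) & (0 <= cx) & (cx < shape[1])
            acc[ri, cy[ok], cx[ok]] += 1
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return cx, cy, radii[ri]

# Edge pixels of a circle of radius 5 centred at (10, 10)
t = np.linspace(0, 2 * np.pi, 36, endpoint=False)
pts = [(int(round(10 + 5 * np.sin(a))), int(round(10 + 5 * np.cos(a))))
       for a in t]
cx, cy, r = circular_hough(pts, (21, 21), [3, 4, 5, 6, 7])
print(cx, cy, r)
```

The accumulator peak recovers the circle parameters because all edge points of the true circle vote for (approximately) the same centre at the correct radius, while votes at wrong radii are spread out.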
Normalization:
Captured images can be of different sizes, which affects the recognition result. Converting the circular pattern of the iris into a rectangular representation of uniform size is called normalization. Normalization also reduces the distortion caused by pupil movement. Normalization is done by Daugman's rubber sheet method [11]. Fig. 3 represents the conversion of the circular representation into a rectangular one.
Daugman suggested a normal Cartesian-to-polar transformation that maps each pixel in the iris area into a pair of polar coordinates (r, Θ), where r and Θ are on the intervals [0, 1] and [0, 2π] respectively.
Figure 3:Daugman's Rubber Sheet Model.
Figure 4.Iris normalized into polar coordinates.
Unwrapping can be formulated as:
I(x(r, Θ), y(r, Θ)) → I(r, Θ)   (1)
with
x(r, Θ) = (1 − r) xp(Θ) + r xi(Θ)   (2)
y(r, Θ) = (1 − r) yp(Θ) + r yi(Θ)   (3)
where I is the iris region image, (x, y) are the original Cartesian coordinates, (r, Θ) are the corresponding normalized polar coordinates, and (xp, yp) and (xi, yi) are the coordinates of the pupil and iris boundaries along the Θ direction. As shown in Fig. 4, the normalized image used is of dimension 240×20.
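Equations (1)-(3) can be sketched directly. Nearest-neighbour sampling and the synthetic concentric test image are simplifying assumptions; the strip below is 20×240 (radial × angular), matching the 240×20 normalized size mentioned above with the axes transposed.

```python
import numpy as np

def rubber_sheet(img, pupil, iris, radial_res=20, angular_res=240):
    """Daugman rubber-sheet sketch: sample the annulus between the
    pupil circle (xp, yp, rp) and the iris circle (xi, yi, ri) on a
    fixed polar grid, giving a radial_res x angular_res strip."""
    xp, yp, rp = pupil
    xi, yi, ri = iris
    out = np.zeros((radial_res, angular_res))
    for a in range(angular_res):
        theta = 2 * np.pi * a / angular_res
        # boundary points along the direction theta
        x_p = xp + rp * np.cos(theta); y_p = yp + rp * np.sin(theta)
        x_i = xi + ri * np.cos(theta); y_i = yi + ri * np.sin(theta)
        for k in range(radial_res):
            r = k / (radial_res - 1)       # r on [0, 1]
            x = (1 - r) * x_p + r * x_i    # Eq. (2)
            y = (1 - r) * y_p + r * y_i    # Eq. (3)
            out[k, a] = img[int(round(y)), int(round(x))]
    return out

# Synthetic eye: brightness encodes distance from the centre (32, 32),
# so each row of the unwrapped strip should be roughly constant.
yy, xx = np.indices((64, 64))
img = np.hypot(xx - 32, yy - 32)
strip = rubber_sheet(img, pupil=(32, 32, 5), iris=(32, 32, 20))
print(strip.shape)
```

On real images the pupil and iris circles come from the circular Hough transform of the previous step, and bilinear interpolation is usually preferred over rounding.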
2.2 Feature Extraction:
Gray Level Co-Occurrence Matrix:
GLCM is a second-order statistic that can be used to analyse an image as a texture. The GLCM is also called the gray tone spatial dependency matrix. The idea behind the GLCM is to describe the texture as a matrix of gray level pair probabilities. It is calculated from the normalized iris image using pixels as primary information. The GLCM is a square matrix of size G × G, where G is the number of gray levels in the image, and it contains information about the positions of pixels having similar gray level values [6].
The (i, j)th element of the matrix is generated by finding the probability that if the pixel location (x, y) has gray level Ii, then the pixel location (x + dx, y + dy) has gray level Ij. The offsets dx and dy are defined by considering various scales and orientations. The probability of co-occurrence of gray levels m and n for two pixels with a defined spatial relationship in an image is calculated in terms of a distance d and an angle Θ.
A co-occurrence matrix is a two-dimensional array P in which both the rows and the columns represent the set of possible image values. A GLCM is defined by first specifying a displacement vector d = (dx, dy) and counting all pairs of pixels separated by d having gray levels i and j:
P(i, j) = |{((x, y), (x + dx, y + dy)) : I(x, y) = i, I(x + dx, y + dy) = j}|   (4)
where P(i, j) is the number of co-occurrences of the pixel values i and j lying at distance d and angle Θ in the image.

Table 1. Matrix of the Image:

1 1 5 6 7
2 5 4 2 1
4 5 8 2 1
6 3 4 1 1
1 2 6 7 7

Table 2. Gray Level Co-Occurrence Matrix with d=1 and Θ=0˚:

    1  2  3  4  5  6  7  8
1   2  1  0  0  1  0  0  0
2   2  0  0  0  1  1  0  0
3   0  0  0  1  0  0  0  0
4   1  1  0  0  1  0  0  0
5   0  0  0  1  0  1  0  1
6   0  0  1  0  0  0  2  0
7   0  0  0  0  0  0  1  0
8   0  1  0  0  0  0  0  0
Table 1 shows the matrix of the image, with 8 gray levels. In the output GLCM of Table 2, element (1, 1) contains the value 2 because there are two instances in the input image where two horizontally adjacent pixels have the values 1 and 1. Element (1, 2) contains the value 1 because there is one instance where horizontally adjacent pixels have the values 1 and 2. The same process continues for the remaining input values, scanning the image for the other pixel pairs (i, j) and recording the sums in the corresponding elements of the GLCM.
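The counting procedure just described can be sketched in plain numpy; gray levels are 1-based to match Tables 1 and 2.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Count co-occurrences P[i-1, j-1] of gray level pairs (i, j)
    for pixel pairs separated by the displacement (dx, dy).
    Gray levels are 1-based, as in Tables 1 and 2."""
    P = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < cols and 0 <= y2 < rows:
                P[img[y, x] - 1, img[y2, x2] - 1] += 1
    return P

# The 5x5 image of Table 1
img = np.array([[1, 1, 5, 6, 7],
                [2, 5, 4, 2, 1],
                [4, 5, 8, 2, 1],
                [6, 3, 4, 1, 1],
                [1, 2, 6, 7, 7]])

P = glcm(img)            # d=1, theta=0: horizontal neighbours
print(P[0, 0])           # pairs (1, 1): prints 2
```

Other directions follow by changing the displacement, e.g. dx=1, dy=1 for 45˚ and dx=0, dy=1 for 90˚.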
GLCMs can be formed for the directions 0˚, 45˚, 90˚ and 135˚. Gray level co-occurrence matrices capture properties of a texture, but they are not directly useful for further analysis such as the comparison of two textures. Numeric features are therefore computed from the co-occurrence matrix to represent the texture more compactly [14].
The following features are extracted from Gray Level Co-Occurrence Matrix:
1. Energy: f1 = Σi Σj p(i, j)²
2. Contrast: f2 = Σi Σj (i − j)² p(i, j)
3. Correlation: f3 = Σi Σj (i − μx)(j − μy) p(i, j) / (σx σy)
4. Homogeneity: f4 = Σi Σj p(i, j) / (1 + |i − j|)
5. Autocorrelation: f5 = Σi Σj (i · j) p(i, j)
6. Dissimilarity: f6 = Σi Σj |i − j| p(i, j)
7. Inertia: f7 = Σi Σj (i − j)² p(i, j)
where p(i, j) is the normalized GLCM and μx, μy, σx, σy are the means and standard deviations of its row and column marginal distributions.
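These features can be sketched as one function over a normalized co-occurrence matrix; the 2×2 matrix at the end is only a toy input to exercise the function, not iris data.

```python
import numpy as np

def glcm_features(P):
    """Haralick-style features from a co-occurrence matrix P.
    P is first normalized so that its entries sum to 1."""
    p = P / P.sum()
    i, j = np.indices(p.shape) + 1          # 1-based gray levels
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    return {
        "energy":          (p ** 2).sum(),
        "contrast":        ((i - j) ** 2 * p).sum(),
        "correlation":     (((i - mu_i) * (j - mu_j) * p).sum()
                            / (sd_i * sd_j)),
        "homogeneity":     (p / (1.0 + np.abs(i - j))).sum(),
        "autocorrelation": (i * j * p).sum(),
        "dissimilarity":   (np.abs(i - j) * p).sum(),
        "inertia":         ((i - j) ** 2 * p).sum(),
    }

# A toy 2x2 co-occurrence matrix just to exercise the function
feats = glcm_features(np.array([[2.0, 1.0], [1.0, 2.0]]))
print(round(feats["energy"], 3))
```

Note that inertia and contrast coincide under these definitions; the feature vector for one iris concatenates these values for the chosen directions.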
Gray Level Run Length Matrix:
GLRLM is a matrix from which texture features can be extracted for texture analysis. A texture is a pattern of gray intensity pixels in a particular direction from the reference pixels. A run length is the number of adjacent pixels that have the same gray intensity in a particular direction. The GLRLM is a two-dimensional matrix in which each element P(i, j | Θ) is the number of runs of length j with intensity i in the direction Θ, where Θ can be 0˚, 45˚, 90˚ or 135˚.
Example:

Table 3. Matrix of the Image:

1 4 3 3
3 2 3 1
1 1 4 4
2 1 2 2

Table 4. Gray Level Run Length Matrix:

Gray Level | Run Length (j)
           |  1  2  3  4
1          |  3  1  0  0
2          |  2  1  0  0
3          |  2  1  0  0
4          |  1  1  0  0
For a given direction, the run length matrix measures, for each allowed gray level value, how many runs of each length occur. For example, Table 3 shows the matrix of an image with 4 gray levels. The GLRLM is calculated with distance d=1 and Θ=0˚. In the output GLRLM of Table 4, element (1, 1) contains 3 because the value 1 occurs three times as a run of length one horizontally in the image matrix. Element (1, 2) contains the value 1 because the run (1 1) of length two occurs horizontally once. Similarly, the remaining output is calculated based on the occurrence of each gray level run in the 0˚ direction.
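The run counting described above can be sketched in plain numpy for the 0˚ direction (runs longer than the row width cannot occur, so the matrix has as many columns as the image).

```python
import numpy as np

def glrlm(img, levels=4):
    """Run-length matrix for horizontal (0 degree) runs.
    P[i-1, j-1] counts runs of length j with gray level i."""
    rows, cols = img.shape
    P = np.zeros((levels, cols), dtype=int)
    for row in img:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1               # run continues
            else:
                P[run_val - 1, run_len - 1] += 1
                run_val, run_len = v, 1    # new run starts
        P[run_val - 1, run_len - 1] += 1   # close the last run
    return P

# The 4x4 image of Table 3
img = np.array([[1, 4, 3, 3],
                [3, 2, 3, 1],
                [1, 1, 4, 4],
                [2, 1, 2, 2]])
P = glrlm(img)
print(P[0])   # runs of gray level 1
```

The 90˚ matrix is obtained by applying the same function to the transposed image, and the diagonal directions by scanning the image's diagonals.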
The following features are extracted from Gray Level Run Length Matrix:
1. Short Run Emphasis (SRE): (1/nr) Σi Σj P(i, j) / j²
2. Long Run Emphasis (LRE): (1/nr) Σi Σj j² P(i, j)
3. Gray Level Non-uniformity (GLN): (1/nr) Σi (Σj P(i, j))²
4. Run Length Non-uniformity (RLN): (1/nr) Σj (Σi P(i, j))²
5. Run Percentage (RP): nr / np
6. Low Gray Level Run Emphasis (LGRE): (1/nr) Σi Σj P(i, j) / i²
7. High Gray Level Run Emphasis (HGRE): (1/nr) Σi Σj i² P(i, j)
where nr is the total number of runs and np is the number of pixels in the image.
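A sketch of these run-length features, applied to the run-length matrix of the Table 3/Table 4 example:

```python
import numpy as np

def glrlm_features(P, n_pixels):
    """Classic run-length features from a run-length matrix P,
    where P[i-1, j-1] counts runs of gray level i and length j."""
    i, j = np.indices(P.shape) + 1.0   # 1-based gray level / run length
    n_r = P.sum()                      # total number of runs
    return {
        "SRE":  (P / j**2).sum() / n_r,
        "LRE":  (P * j**2).sum() / n_r,
        "GLN":  (P.sum(axis=1) ** 2).sum() / n_r,
        "RLN":  (P.sum(axis=0) ** 2).sum() / n_r,
        "RP":   n_r / n_pixels,
        "LGRE": (P / i**2).sum() / n_r,
        "HGRE": (P * i**2).sum() / n_r,
    }

# Run-length matrix of the 4x4 example image (Table 4)
P = np.array([[3, 1, 0, 0],
              [2, 1, 0, 0],
              [2, 1, 0, 0],
              [1, 1, 0, 0]])
feats = glrlm_features(P, n_pixels=16)
print(feats["SRE"], feats["RP"])
```

SRE is large for images dominated by short runs (fine texture) and LRE for long runs (coarse texture), which is what makes these features discriminative for iris patterns.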
2.3 Support Vector Machine:
A support vector machine is used for classification and regression. An SVM is a binary classifier that separates two classes of data. SVMs are based on the concept of decision planes that define boundaries: a decision plane is one that separates a set of objects having different class memberships. Figure 5 shows binary class classification. There are two important aspects in the development of the SVM as a classifier. The first is the determination of the optimal hyperplane that will optimally separate the two classes; the other is the transformation of a non-linearly separable classification problem into a linearly separable one [11].
Let the data set be {xi} with class labels yi ∈ {1, −1}, where i = 1, 2, ..., N. In the linearly separable case, there exists a separating hyperplane which defines the boundary between class 1 (labelled y = 1) and class 2 (labelled y = −1).
The separating hyperplane is: w · x + b = 0   (5)
which implies yi(w · xi + b) ≥ 1 for all i   (6)
There are numerous possible values of w and b that create a separating hyperplane. In an SVM, only the hyperplane that maximizes the margin between the two sets is used, where the margin is the distance from the hyperplane to the closest data points.
Figure 5: SVM with Linear separable data
The margins are defined as d+ and d−. The margin is maximized in the case d+ = d−. Training data on the margins lie on the hyperplanes H+ and H−. The distance between the hyperplanes H+ and H− is
2 / ||w||   (7)
A linear support vector machine is composed of a set of given support vectors z and a set of weights w. The output of an SVM with N support vectors z1, z2, ..., zN and weights w1, w2, ..., wN is then given by:
F(x) = Σi wi (zi · x) + b   (8)
A decision function is then applied to transform this output into a binary decision: the sign is used, so that outputs greater than zero are assigned to one class and outputs less than zero to the other [7]. When the data are not separable, slack variables ξi are introduced into the inequalities to relax them slightly, so that some points are allowed to lie within the margin or even be misclassified completely. The resulting problem is then to minimize
(1/2) ||w||² + C Σi ξi   (9)
The decision boundary can be found by solving the following constrained optimization problem:
minimize L(w) = (1/2) ||w||² + C Σi ξi   (10)
subject to yi(w · xi + b) ≥ 1 − ξi, ξi ≥ 0   (11)
Once the problem is optimized, the parameters of the optimal hyperplane are
w = Σi αi yi xi   (12)
αi is zero for every xi except the ones that lie on the margin; the training data with non-zero αi are called support vectors. If the number of training examples is large, SVM training will be very slow because the number of parameters αi in the dual problem is very large. The kernel function is important because it creates the kernel matrix, which summarizes all the data.
Multi-class SVM: The SVM does not generalize naturally to multiclass classification; SVMs are binary classifiers, so they can only decide between two classes at once [7]. In this work we apply a multiclass SVM to classify iris patterns due to its outstanding performance. There are many approaches to multiclass classification using SVMs. The approach adopted here is one-against-all classification, which constructs M SVM classifiers, the ith one separating class i from all the remaining classes [9]. One problem with this method is that when the M classifiers are combined to make the final decision, the classifier which generates the highest value from its decision function is selected as the winner and the corresponding class label is assigned without considering the competence of the classifiers: the outputs of the decision functions are employed as the only index of how strongly a sample belongs to a class. Figure 6 represents the one-against-all method.
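The one-against-all scheme can be sketched end to end. To keep the example dependency-free, a simple perceptron stands in for each binary SVM (an assumption; it maximizes no margin), but the combination rule is exactly the one described: train M binary separators, then predict by the maximum decision value.

```python
import numpy as np

def train_binary(X, t, epochs=500):
    """Perceptron stand-in for one binary classifier:
    t has entries +1 (target class) and -1 (all other classes)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(Xb, t):
            if ti * (w @ xi) <= 0:              # misclassified point
                w += ti * xi
                errors += 1
        if errors == 0:                         # converged
            break
    return w

def one_vs_all_fit(X, y, n_classes):
    # One binary separator per class: class c vs all the rest
    return np.array([train_binary(X, np.where(y == c, 1.0, -1.0))
                     for c in range(n_classes)])

def one_vs_all_predict(W, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ W.T).argmax(axis=1)            # winner takes all

# Toy data: three linearly separable classes
X = np.array([[-2., 0.], [-2., 1.], [0., 3.], [0., 4.], [2., 0.], [2., 1.]])
y = np.array([0, 0, 1, 1, 2, 2])
W = one_vs_all_fit(X, y, 3)
print(one_vs_all_predict(W, X))   # recovers the training labels
```

In the actual system each row of X would be a fused GLCM+GLRLM feature vector and each binary separator an SVM, but the argmax-of-decision-values combination is unchanged.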
3. EXPERIMENTAL RESULT
We have used the CASIA version 3.0 iris image database. Each iris class is composed of 9 samples, and 20 classes in total were taken for the experiment. Experiments were done with the GLCM and GLRLM features individually, and finally with the fusion of these two feature sets. In total, 100 samples were taken for training and 80 samples for testing.
GLCM features are extracted in the 0˚ direction with d=1. In total 7 features are extracted: energy, contrast, correlation, homogeneity, autocorrelation, dissimilarity and inertia.
Experiments were done with GLRLM features in the 0˚, 45˚ and 90˚ directions. For classification a support vector machine is used; since the SVM only supports binary classification, a one-against-all support vector machine is used for multiclass classification. From the experiment it is found that nearly 90% accuracy can be obtained using the SVM classifier, and for some sets of samples 100% accuracy is found. GLRLM features are extracted better in the 0˚ and 90˚ directions. The experiment with the fusion of GLCM and GLRLM features was done in the 0˚ direction.
By analysing the graphs it is found that GLCM, GLRLM and the fusion of features need a minimum of 5 classes to show better accuracy. As the number of classes increases, accuracy decreases due to the overlapping of the feature vectors of the training images.
Figs. 7 to 10 show the comparison of class number versus classification accuracy in percent with the GLCM, GLRLM and fused GLCM+GLRLM features. It is noted that the fusion of features gives the better accuracy. The experiment was done with different k-folds. In Fig. 7 the analysis is done with a 5-1 fold: 5 samples per class are considered as training data and 1 sample per class as test data. The experiment is done with 10 classes, so in the 5-1 fold 50 training samples and 10 test samples are considered in total. Blue represents the accuracy obtained with GLCM features, red with GLRLM features and green with the fusion of GLCM and GLRLM features. Similarly the analysis is done with 5-2, 5-3 and 5-4 folds.
Comparison of the performance based on GLCM,GLRLM and Fusion of features:
Figure 7: Comparison of the performance with 5-1 fold
Figure 8: Comparison of the performance with 5-2 fold
Figure 9: Comparison of the performance with 5-3 fold
Figure 10: Comparison of the performance with 5-4 fold
Analysis Result:
K-folds / FRR    GLCM      GLRLM     Fusion of GLCM and GLRLM
5-1              29.09%    25.45%    30.90%
5-2              37.27%    35.45%    30%
5-3              42.42%    41.8%     37.57%
5-4              40%       36.36%    32.27%
Table 5 Iris recognition results based on fusion feature, GLCM feature and GLRLM feature
Table 5 shows the test results using the GLCM features, the GLRLM features and the fusion of GLCM and GLRLM features. From the table we can conclude that the fusion of the Gray Level Co-Occurrence Matrix and Gray Level Run Length Matrix features is the best for iris recognition.
4. CONCLUSION
In this paper the CASIA 3.0 database of gray scale eye images is used in order to verify the authorized user of the iris recognition system. From the experiments it is found that a non-filter-based technique can be successfully used for iris identification, and that the technique is invariant to iris rotation. One-against-all classification, which constructs M binary classifiers to differentiate each class from the rest, is a conventional method to extend the SVM from binary to M-class classification. Classification accuracy is better with the GLCM features in the 0˚ direction and with the fusion of GLCM and GLRLM features. Higher accuracy can be achieved either by increasing the number of samples per class in the training phase or by considering the fusion of GLCM and GLRLM features. According to the experimental results, the method provides classification rates of 90% with the fusion of GLCM and GLRLM features.
ACKNOWLEDGEMENT
We would like to thank the National Laboratory of Pattern Recognition Institute of Automation at the Chinese Academy of Sciences for granting us access to their database of human iris images.
REFERENCES
[1] Daugman, J. (2004). How iris recognition works. Circuits and Systems for Video Technology, IEEE Transactions on, 14(1), 21-30.
[2] Daugman, J. G. (1988). Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. Acoustics, Speech and Signal Processing, IEEE Transactions on, 36(7), 1169-1179.
[3] Wildes, R. P. (1997). Iris recognition: an emerging biometric technology. Proceedings of the IEEE, 85(9), 1348-1363.
[4] Boles, W. W., & Boashash, B. (1998). A human identification technique using images of the iris and wavelet transform. Signal Processing, IEEE Transactions on, 46(4), 1185-1188.
[5] Alioua, N., Amine, A., Rziza, M., & Aboutajdine, D. (2011, April). Eye state analysis using iris detection based on Circular Hough Transform. In Multimedia Computing and Systems (ICMCS), 2011 International Conference on (pp. 1-5). IEEE.
[6] Gupta, G., & Agarwal, M. (2005). Iris recognition using non filter-based technique. Energy, 2, 1.
[7] Pushpalatha, K. N., Shashikumar, A. K. G. D., & ShivaKumar, K. B. (2012). Iris Recognition System with Frequency Domain Features Optimized with PCA and SVM Classifier.
[8] Roy, K., Bhattacharya, P., & Debnath, R. C. (2007, December). Multi-class SVM based iris recognition. In Computer and information technology, 2007. iccit 2007. 10th international conference on (pp. 1-6). IEEE.
[9] Liu, Y., & Zheng, Y. F. (2005, July). One-against-all multi-class SVM classification using reliability measures. In Neural Networks, 2005. IJCNN'05. Proceedings. 2005 IEEE International Joint Conference on (Vol. 2, pp. 849-854). IEEE.
[10] Sohail, A. S. M., Bhattacharya, P., Mudur, S. P., & Krishnamurthy, S. (2011, May). Local relative GLRLM-based texture feature extraction for classifying ultrasound medical images. In Electrical and Computer Engineering (CCECE), 2011 24th Canadian Conference on (pp. 001092-001095). IEEE.
[11] Ali, H., & Salami, M. J. E. (2008, May). Iris recognition system by using support vector machines. In Computer and Communication Engineering, 2008. ICCCE 2008. International Conference on (pp. 516-521). IEEE.
[12] Zaim, A., Sawalha, A., Quweider, M., Iglesias, J., & Tang, R. (2006, May). A New Method for Iris Recognition using Gray-Level Co-Occurrence Matrix. In Electro/information Technology, 2006 IEEE International Conference on (pp. 350-353). IEEE
[13] Manavalan, R., & Thangavel, K. (2012). Evaluation of Textural Feature Extraction from GRLM for Prostate Cancer TRUS Medical Images.
[14] Gui, F., & Ye-qing, W. (2008). An Iris Recognition Algorithm Based on DCT and GLCM. In Proceedings of SPIE, the International Society for Optical Engineering (pp. 70001H-1). Society of Photo-Optical Instrumentation Engineers.
[15] Yang, L., Dong, Y. X., Wu, Z. T., & Fei, L. Y. (2010, June). Eyelid Location Using Asymmetry Canny Operator. In Computer Design and Applications (ICCDA), 2010 International Conference on (Vol. 1, pp. V1-533). IEEE.
[16] Lin, Z., & Lu, B. (2010, October). Iris recognition method based on the optimized Gabor filters. In Image and Signal Processing (CISP), 2010 3rd International Congress on (Vol. 4, pp. 1868-1872). IEEE.