
Page 1: [IEEE 2013 18th International Conference on Digital Signal Processing (DSP) - Fira (2013.4.27-2013.4.30)] 2013 Saudi International Electronics, Communications and Photonics Conference


Are Infrared Images Reliable For Palmprint Based Personal Identification Systems?

Abdallah Meraoumia¹, Salim Chitroub² and Ahmed Bouridane³

¹ Université Kasdi Merbah Ouargla, Laboratoire de Génie Électrique, Faculté des Sciences et de la Technologie et des Sciences de la Matière, Ouargla, 30000, Algérie
² Signal and Image Processing Laboratory, Electronics and Computer Science Faculty, USTHB, P.O. Box 32, El Alia, Bab Ezzouar, 16111, Algiers, Algeria
³ School of Computing, Engineering and Information Sciences, Northumbria University, Pandon Building, Newcastle upon Tyne, UK

Email: [email protected], S [email protected], [email protected]

Abstract—Several studies of palmprint-based person identification have focused on palmprint images captured in the visible part of the spectrum. As a possible improvement of existing palmprint systems, the work proposed here concerns the use of infrared palmprint images for palmprint identification. To that end, a comparison of infrared palmprint images against gray-level and color images is given. At the feature-extraction stage, features are generated by Principal Component Analysis (PCA), a technique that has been widely used for pattern recognition as well as in the field of biometrics. The proposed scheme is tested and evaluated on the PolyU multispectral palmprint database of 400 users. Our experimental results show that the infrared spectrum achieves the best result. In addition, since a color image comprises three spectral bands, we propose score-level and image-level fusion schemes to integrate this color information.

Index Terms—Biometrics, identification, Palmprint, Infrared image, Principal component analysis, Data fusion.

I. INTRODUCTION

THE capability of automatically establishing the identity of individuals, known as person identification, is essential to many applications such as access control, surveillance systems and physical access to buildings [1]. Traditional personal identification approaches, which use something that you know, such as a PIN, or something that you have, such as an ID card, are not sufficiently reliable to satisfy security requirements, since these tokens may be faked or cracked [2]. Biometric recognition is emerging as a powerful means of automatically recognizing a person's identity with higher reliability. It is based on anatomical features such as the fingerprint, iris, face and palmprint, or on behavioral traits such as gait or signature [3]. Unlike traditional approaches, biometrics has significant advantages: the biometric characteristics of an individual are not transferable, are unique to every person, and cannot be lost, stolen or broken.

Recently, a number of biometrics-based technologies have been developed, and hand-based person identification is one of them. The human hand contains a wide variety of features, e.g., shape, texture and principal palm lines, that can be used by biometric systems [4]. These features are relatively stable, and the hand image from which they are extracted can be acquired relatively easily. Furthermore, identification systems based on hand features are among the most acceptable to users. Palmprint identification is one kind of hand-biometric technology, and it has proven to be a unique biometric identifier owing to its stable and distinctive traits [5].

Several studies of palmprint-based personal recognition have focused on improving the performance of palmprint images captured under visible light (gray-level and color images). However, during the past few years some researchers have considered Near-InfraRed (NIR) images to improve the effectiveness of these systems [6]. Using near-infrared light, palm-vein images can be collected, and the palm-vein pattern is obviously much harder to fake than the palmprint. In this paper, we evaluate the usefulness of NIR palmprint images for improving palmprint-based person identification systems. For that purpose, we implement several systems exploiting visible light (gray-level and color images) and near-infrared light, and then carry out a comparative study to assess the efficiency of NIR images for palmprint-based person identification. Our palmprint identification system is based on features extracted from hand images by the PCA technique: palm images are projected into the subspace obtained by the PCA transform, yielding the so-called eigenpalm features. For multimodal biometric identification based on the color spectral bands, fusion is performed at the image and matching-score levels.

The rest of the paper is organized as follows. The proposed scheme of the uni-modal biometric system is presented in Section 2. Section 3 gives a brief description of region-of-interest extraction. Feature extraction is discussed in Section 4. Section 5 is devoted to the matching and normalization method. The fusion techniques used for combining the information are detailed in Section 6. In Section 7, the experimental results, before and after fusion, are given and discussed. Finally, conclusions and further work are presented in Section 8.

II. PROPOSED SYSTEM

Fig. 1 shows the block diagram of the proposed uni-modal biometric identification system based on the palmprint image.

978-1-4673-6195-8/13/$31.00 ©2013 IEEE


Fig. 1. Uni-modal palmprint identification system based on principal component analysis for modeling and classification.


Fig. 2. Various steps in a typical region of interest extraction algorithm. (a) The filtered image, (b) The binary image, (c) The boundaries of the binary image and the points for locating the ROI pattern, (d) The central portion localization, and (e) The preprocessed result (ROI).


In the preprocessing module, the Region Of Interest (ROI) is localized. In the enrolment phase, each ROI sub-image is mapped into a one-dimensional signal (observation vector). These vectors are then concatenated into a two-dimensional matrix, which is transformed by PCA into a feature space called the eigenpalms space (training module). In the identification phase, the same feature vector is extracted from the test palmprint image and projected into the corresponding subspace. The Euclidean distance to all of the references in the database is then computed (matching module). Finally, after a normalization process, the decision to accept or reject the person is made.

III. PALMPRINT PREPROCESSING

In order to localize the palm area, the first step is to preprocess the palm images; we use the preprocessing technique described in [7] to align the palmprints. In this technique, a Gaussian smoothing filter is used to smooth the image before extracting the ROI sub-image and its features. After that, Otsu's thresholding is used to binarize the hand, and a contour-following algorithm extracts the hand contour. The tangent of two stable points on the hand contour (between the forefinger and the middle finger, and between the ring finger and the little finger) is computed and used to align the palmprint. The central part of the image, of size 128 × 128, is then cropped to represent the whole palmprint. Fig. 2 shows the palmprint preprocessing steps.
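As a concrete illustration of the binarization step described above, the following NumPy sketch implements Otsu's threshold (the Gaussian smoothing and contour following are omitted; the toy image and all function names are hypothetical, not the authors' code):

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Return the Otsu threshold for an 8-bit grayscale image.

    Picks the gray level that maximizes the between-class variance
    of the background/foreground split, as used here to binarize
    the hand before contour following.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # class-0 (background) probability
    mu = np.cumsum(prob * np.arange(256))     # class-0 cumulative mean
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Toy bimodal image: dark background (level 10), bright "hand" region (level 200).
img = np.full((64, 64), 10, dtype=np.uint8)
img[16:48, 16:48] = 200
t = otsu_threshold(img)
binary = img > t          # binarized hand mask
```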

IV. EXTRACTION OF THE EIGENPALM FEATURE

A. Principal Component Analysis

The PCA transform, applied to a set of images, can be used to find the subspace occupied by all of the images in the analyzed set. The principal components are computed as follows [8]:

Let the training set of original data vectors (each of dimension $M$), $X$, be $x_1, x_2, x_3, \cdots, x_N$. First, compute the mean of the set, $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$. Second, subtract the mean from each data vector to obtain the mean-removed data, $\varphi_i = x_i - \mu$. Third, form the $(M \times N)$ matrix of mean-removed data, $D = [\varphi_1\ \varphi_2\ \varphi_3 \cdots \varphi_N]$. Fourth, compute the $(M \times M)$ sample covariance matrix, $C = \frac{1}{N}\sum_{n=1}^{N} \varphi_n \varphi_n^T = \frac{1}{N} D D^T$, and compute the eigenvalues of the covariance matrix together with their corresponding eigenvectors. Finally, keep only the eigenvectors corresponding to the $L$ largest eigenvalues; these eigenvectors are the principal components.

B. Template Generation

When the images are encoded into this subspace and then reconstructed in the original space, the error between the reconstructed and the original images is minimized.

To begin, we have a training set of $N$ ROI sub-images. Each ROI sub-image is reordered into a one-dimensional vector $x_i$, and all the $x_i$, $i = 1 \cdots N$, are concatenated into a two-dimensional matrix $X = [x_1, x_2, x_3, \cdots, x_N]$. The process of obtaining a single subspace consists of finding the covariance matrix $C$ of the training set of ROI sub-images, $X$, and computing its eigenvectors. Each original ROI sub-image can then be projected into this subspace. The eigenvectors spanning the palm space can be represented as images with the same dimensionality as the palm ROI sub-images used to obtain them; these sub-images are called eigenpalms.
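The training and projection steps above can be sketched in a few lines of NumPy. This is a minimal, hypothetical implementation (toy data, function names of our own choosing), using the standard eigenface trick of diagonalizing the small $N \times N$ matrix $D^T D$ rather than the $M \times M$ covariance:

```python
import numpy as np

def train_eigenpalms(X: np.ndarray, L: int):
    """Build the eigenpalm subspace from training vectors.

    X : (M, N) matrix whose columns are the N vectorized ROI
        sub-images of dimension M, following the paper's notation.
    L : number of leading eigenvectors (eigenpalms) to keep.
    Returns the data mean and the (M, L) projection basis.
    """
    mean = X.mean(axis=1, keepdims=True)
    D = X - mean                                  # mean-removed data
    # Eigen-decomposition of the small (N, N) matrix D^T D instead of
    # the large (M, M) covariance D D^T.
    small = D.T @ D / X.shape[1]
    vals, vecs = np.linalg.eigh(small)
    order = np.argsort(vals)[::-1][:L]            # L largest eigenvalues
    basis = D @ vecs[:, order]                    # lift back to image space
    basis /= np.linalg.norm(basis, axis=0)        # unit-norm eigenpalms
    return mean, basis

def project(x: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Project a vectorized ROI sub-image onto the eigenpalm subspace."""
    return basis.T @ (x - mean.ravel())

# Toy run: 10 random 16x16 "palms" flattened into a 256 x 10 matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))
mean, basis = train_eigenpalms(X, L=5)
template = project(X[:, 0], mean, basis)   # 5-dimensional feature vector
```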

Page 3: [IEEE 2013 18th International Conference on Digital Signal Processing (DSP) - Fira (2013.4.27-2013.4.30)] 2013 Saudi International Electronics, Communications and Photonics Conference

3

V. MATCHING & NORMALIZATION PROCESS

A template in our system is represented by a palm feature vector. In order to identify a user, matching between the test template, $\varphi_t$, and the templates from the database, $\varphi_i$, has to be performed. The matching between corresponding feature vectors is based on the Euclidean distance:

$d(\varphi_t, \varphi_i) = \sqrt{(\varphi_t - \varphi_i)(\varphi_t - \varphi_i)^T}$ (1)

where $i = 1, 2, \cdots, N$ indexes the palm templates in the database and $N$ is the total number of templates in the database.

During the identification process, the distance $d$ between $\varphi_t$ and all of the templates in the database is computed, giving the vector $\mathcal{V}$ of all these distances:

$\mathcal{V} = [\mathcal{V}_1\ \mathcal{V}_2\ \mathcal{V}_3\ \mathcal{V}_4 \cdots \mathcal{V}_N]$ (2)

An important aspect that has to be addressed in the identification process is the normalization of the scores obtained. Normalization typically involves mapping the scores into a common domain. Thus, a Min-Max normalization scheme was employed to transform the computed scores into similarity scores in the same range [9]:

$\mathcal{V} = \dfrac{\mathcal{V} - \min(\mathcal{V})}{\max(\mathcal{V}) - \min(\mathcal{V})}$ (3)

where $\mathcal{V}$ now denotes the normalized scores. These scores are then compared and the lowest one is selected, so the best score $D_o$ is:

$D_o = \min_i(\mathcal{V}_i)$ (4)

Finally, this score is used for decision making. A threshold $T_o$ regulates the system decision: the system infers that pairs of biometric samples generating scores lower than or equal to $T_o$ are mate pairs, and that pairs generating scores higher than $T_o$ are non-mate pairs.
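The matching, normalization and thresholding chain of eqs. (1)-(4) can be sketched as follows. This is a hypothetical NumPy implementation, and it makes one explicit assumption not stated in the text: the Min-Max bounds are fixed values estimated beforehand on training scores, so that the thresholded minimum remains comparable across probes:

```python
import numpy as np

def identify(test_template: np.ndarray, db: np.ndarray, threshold: float,
             score_min: float = 0.0, score_max: float = 10.0):
    """Open-set identification by Euclidean matching, eqs. (1)-(4).

    db : (N, L) array of the N enrolled feature vectors.
    Returns (best_index, best_normalized_score, accepted).
    """
    # Eqs. (1)-(2): Euclidean distances to all N enrolled templates.
    dist = np.linalg.norm(db - test_template, axis=1)
    # Eq. (3): Min-Max normalization. The bounds are assumed fixed
    # (estimated on training scores) rather than taken per probe.
    v = (dist - score_min) / (score_max - score_min)
    # Eq. (4): keep the lowest normalized score D_o.
    best = int(np.argmin(v))
    # Decision: accept if D_o <= T_o (mate pair), reject otherwise.
    return best, float(v[best]), bool(v[best] <= threshold)

# Toy database of three 2-D templates and one probe close to template 1.
db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
idx, score, ok = identify(np.array([0.9, 1.1]), db, threshold=0.1)
```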

VI. FUSION SCHEMES

Multimodal biometric recognition, in which two or more modalities are used jointly, has been investigated and found to increase robustness and thus improve the accuracy of recognition systems [10]. In a multi-modal system design, the modalities operate independently and their results are combined using an appropriate fusion scheme. Fusion can be performed at different levels [11]: at the image level, at the feature level, at the score level and at the decision level. In this paper we combine the modalities at both the image level and the matching-score level.

A. Fusion at Image Level

Image fusion is the process by which two or more images are combined into a single image [12]. The fused image has the same dimensionality as its inputs and represents a person's identity in a single image space. Several fusion techniques have been proposed by various researchers. In our case, the spectral bands of the color image (Red, Green and Blue) are combined, and five different fusion schemes (DWT, PCA, Laplacian, Gradient and Contrast) are used for fusing these bands.
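The paper does not detail its five fusion rules, so the following NumPy sketch illustrates only the general idea with two simple, generic rules: a baseline pixel-wise average and a contrast-selective fusion driven by a discrete Laplacian. The function names and toy channels are hypothetical, not the authors' implementation:

```python
import numpy as np

def laplacian_energy(img: np.ndarray) -> np.ndarray:
    """Absolute response of the 4-neighbour discrete Laplacian,
    used here as a simple per-pixel contrast measure."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    return np.abs(p[:-2, 1:-1] + p[2:, 1:-1] +
                  p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1])

def fuse_contrast(channels):
    """Contrast-selective fusion: at each pixel keep the channel
    (R, G or B here) with the strongest local contrast."""
    stack = np.stack(channels).astype(np.float64)
    energy = np.stack([laplacian_energy(c) for c in channels])
    pick = np.argmax(energy, axis=0)
    return np.take_along_axis(stack, pick[None], axis=0)[0]

def fuse_average(channels):
    """Baseline pixel-wise average of the spectral channels."""
    return np.mean(np.stack(channels, axis=0), axis=0)

# Toy channels: R has a high-contrast square, G and B are flat.
r = np.zeros((8, 8)); r[2:6, 2:6] = 100.0
g = np.full((8, 8), 50.0)
b = np.full((8, 8), 10.0)
fused_c = fuse_contrast([r, g, b])
fused_a = fuse_average([r, g, b])
```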

B. Fusion at Matching Score Level

The fusion of the sub-systems based on the color spectral bands is realized using five simple rules [13]: the SUM and WeigHTed sum (WHT) of the similarity measures, their MINimum (MIN) and MAXimum (MAX), and finally their MULtiplication (MUL). The final decision of the classifier is then made by choosing the class that maximizes the fused similarity measure between the sample and the matching base.
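The five score-level rules can be written down directly. In this hypothetical sketch, the per-band scores and the WHT weights are made up for illustration (in practice the weights would be tuned on validation data, e.g., inversely to each band's EER):

```python
import numpy as np

def fuse_scores(s, rule: str = "SUM", weights=None) -> float:
    """Combine per-channel matching scores with one of the five rules.

    s : sequence of scores, one per color band, for a single comparison.
    weights : per-band weights for the WHT rule (assumed pre-tuned).
    """
    s = np.asarray(s, dtype=np.float64)
    if rule == "SUM":
        return float(s.sum())
    if rule == "WHT":
        w = np.asarray(weights, dtype=np.float64)
        return float(w @ s) / float(w.sum())   # normalized weighted sum
    if rule == "MIN":
        return float(s.min())
    if rule == "MAX":
        return float(s.max())
    if rule == "MUL":
        return float(s.prod())
    raise ValueError(f"unknown fusion rule: {rule}")

scores = [0.2, 0.1, 0.4]      # hypothetical R, G, B matching scores
fused_sum = fuse_scores(scores, "SUM")
fused_wht = fuse_scores(scores, "WHT", weights=[2, 1, 1])
```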

VII. EXPERIMENTAL RESULTS AND DISCUSSION

A. Experimental Database

The proposed method is validated on the multi-spectral palmprint database of the Hong Kong Polytechnic University (PolyU) [14]. The database contains images captured under visible and infrared light: for each person, four palmprint images, corresponding to the Red, Green, Blue and near-infrared bands, are collected per shot. In total, 6000 multi-spectral palmprint images were collected from 500 persons in two separate sessions. In each session, each person provided 6 images for each palm, so there are 12 images per person; therefore 48 spectral images over all illuminations were collected from the 2 palms of each person. The average time interval between the first and the second session was about 9 days.

B. Evaluation Criteria

The utility of any biometric recognition system for a particular application can be characterized by two values [15]: the False Accept Rate (FAR), which is the ratio of the number of instances in which feature pairs from different traits are found to match to the total number of such attempts, and the False Reject Rate (FRR), which is the ratio of the number of instances in which feature pairs from the same trait are found not to match to the total number of such attempts. The system can be adjusted to trade off these two criteria for a particular application; however, decreasing one increases the other and vice versa. The system threshold is obtained using the Equal Error Rate (EER) criterion, i.e., the point where FAR = FRR, based on the rationale that both rates must be as low as possible for the biometric system to work effectively.

Another performance measure derived from FAR and FRR is the Genuine Acceptance Rate (GAR), which represents the identification rate of the system. To describe the performance of a biometric system visually, Receiver Operating Characteristic (ROC) curves are usually given; a ROC curve shows how the FAR changes relative to the GAR and vice versa [16]. Biometric recognition systems generate matching scores that represent the degree of similarity (or dissimilarity) between the input and the stored template.
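The FAR/FRR trade-off and the EER point can be computed from genuine and impostor score samples as follows. The score values below are made up for illustration, and the scores are assumed distance-like (a match is declared when the score falls at or below the threshold):

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR and FRR at one threshold for distance-like scores."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    far = np.mean(impostor <= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine > threshold)     # genuine users wrongly rejected
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return (EER, threshold) at the
    point where FAR and FRR are closest, approximating FAR = FRR."""
    candidates = np.unique(np.concatenate([genuine, impostor]))
    rates = np.array([far_frr(genuine, impostor, t) for t in candidates])
    i = int(np.argmin(np.abs(rates[:, 0] - rates[:, 1])))
    return float(rates[i].mean()), float(candidates[i])

# Hypothetical samples: genuine matches score low, impostors score high.
gen = np.array([0.05, 0.08, 0.10, 0.12, 0.30])
imp = np.array([0.25, 0.40, 0.55, 0.60, 0.70])
eer, thr = equal_error_rate(gen, imp)
```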

C. Simulation Results

In the system-design phase (all experiments), three of the twelve images of each class (person) were randomly selected and used in the enrolment stage to create the system database;



Fig. 3. Uni-modal palmprint open set identification test results. (a) The ROC curves with respect to the image types and (b) The ROC curve for the NIR modality based uni-modal system.

Table 1: Open set uni-modal identification systems performances

        GRAY                    BLUE                    RED                     GREEN                   NIR
To      FAR     FRR     To      FAR     FRR     To      FAR     FRR     To      FAR     FRR     To      FAR     FRR
0.0100  0.043   9.722   0.0001  0.002   1.194   0.0001  0.008   3.000   0.0251  0.245   16.306  0.0100  0.001   0.611
0.1272  2.949   2.949   0.1221  0.139   0.139   0.1342  0.674   0.674   0.1251  7.709   7.709   0.1152  0.056   0.056
0.2000  11.608  1.611   0.3000  5.712   0.056   0.2500  7.583   0.111   0.1800  19.649  5.500   0.2000  0.4322  0.000

the remaining nine images were used for testing. In the following tests, we set up a database of 400 classes, which is similar to the number of employees in small to medium sized companies. The client experiments were performed by comparing the nine test images with the corresponding class in the database, for a total of 3600 comparisons. The impostor experiments were performed by comparing the nine images with each class in the database, for a total of 718200 impostor comparisons.

In identification mode, the system examines whether the user is one of the enrolled candidates: the biometric data is collected and compared against all the templates in the system database. Identification is closed-set if the person is assumed to exist in the system database; in open-set identification, the person is not guaranteed to exist in it. In our work, the proposed method was tested in both identification modes.

1) Infrared Versus Each Spectrum: In the first stage, we conducted several experiments to investigate the effectiveness of the infrared palmprint, using the different palmprint modalities (Gray, Red, Green, Blue and infrared). Our goal is to choose the modality yielding the best performance, i.e., the one that minimizes the system's EER. For open-set identification, the ROC curves, which plot FRR against FAR, for the five modalities are shown in Fig. 3.(a); this figure compares the identification performance of the system across palmprint modalities. From this figure we conclude that the infrared palmprint based system achieves the best performance, with an EER of 0.056 % at a threshold $T_o$ = 0.1152. The poorest results are obtained with the Green modality, for which the system works at an EER of 7.709 % at $T_o$ = 0.1251. The ROC curve for the best case is displayed in Fig. 3.(b). Compared with other existing unimodal systems, the proposed open-set identification system achieves better results in terms of EER. Finally, Table 1 presents the experimental results obtained for all modalities in the open-set identification case.

2) Palmprint Based Multimodal Systems: The aim of this section is to investigate whether the system performance can be improved by fusing the information from the color-image modalities (Red, Green and Blue). The information presented by the different modalities is therefore fused using two methods: fusion at the image level and fusion at the matching-score level.

a) Fusion at Image Level: Image fusion is the process by which two or more images (modalities) are combined into a single image. A series of experiments was carried out on the multi-spectral palmprint database to select the fusion technique (described in Section VI.A) that minimizes the EER. Fig. 4.(a) shows the open-set identification results for all the fusion techniques. The results suggest that fusing the RGB (Red, Green and Blue) bands with the contrast technique performs better than the others (EER = 0.204 % at $T_o$ = 0.1280). Finally, Table 2 summarizes the open-set identification results for all fusion rules.

b) Fusion at Matching Score Level: To validate our idea we ran further tests on the multi-modality based identification system. The individual matching scores from all sub-systems are combined to generate a single scalar score, which is then used to make the final decision. During the system design we experimented with five different matching-score fusion schemes (described in Section VI.B). For the open-set identification system, the experimental results at the EER point are shown in Fig. 4.(b). The experimental



Fig. 4. Multi-modal palmprint open set identification test results. (a) Fusion of RGB combination at image level and (b) Fusion of RGB combination at matching score level.

Table 2: Open set multi-modal identification systems performances (Image fusion)

COMBINATION   DWT             PCA             CONTRAST        LAPLACIAN       GRADIENT
              To      EER     To      EER     To      EER     To      EER     To      EER
RGB           0.1231  0.470   0.1257  1.153   0.1280  0.204   0.1286  0.226   0.1140  1.028

Table 3: Open set multi-modal identification systems performances (Matching score fusion)

COMBINATION   SUM             WHT             MIN             MAX             MUL
              To      EER     To      EER     To      EER     To      EER     To      EER
RGB           0.1228  0.088   0.1381  0.110   0.0225  0.153   0.2099  1.028   0.0003  0.183

results show that the SUM rule based fusion scheme gives the best performance, with a minimum EER of 0.088 % at threshold $T_o$ = 0.1228, followed by the WHT rule with a minimum EER of 0.110 % at $T_o$ = 0.1381. The MIN rule gives an EER of 0.153 % at threshold $T_o$ = 0.0225. The MUL and MAX rules provide EERs of 0.183 % and 1.028 % (at thresholds $T_o$ = 0.0003 and $T_o$ = 0.2099, respectively). Finally, all experiments, in terms of EER, are summarized in Table 3.

3) Infrared Versus Multimodal Systems: To answer the question posed in the title, we carried out a comparative study between the implemented systems: the infrared palmprint based unimodal identification system and the color palmprint based multimodal identification systems. Fig. 5.(a) compares the performance of these systems (ROC curves). From these results it is clear that identification based on the infrared palmprint easily outperforms the multimodal open-set identification systems: the open-set performance is improved by over 36 % with respect to fusion at the matching-score level (EER 0.088 %) and by 72 % with respect to fusion at the image level (EER 0.204 %). The distance distributions of genuine and impostor matchings obtained by the proposed scheme, and the FAR and FRR as functions of the threshold, when the infrared palmprint is used, are plotted in Fig. 5.(b) and Fig. 5.(c), respectively.

4) Closed Set Identification System: For closed-set identification, a series of experiments was also carried out to select the best palmprint modality, by comparing all modalities to determine which one gives the best identification rate.

Table 4 presents the experimental results obtained for the unimodal systems. From Table 4, the best Rank-One Recognition (ROR) accuracy is 99.306 %, with the lowest Rank of Perfect Recognition (RPR) of 54, obtained with the infrared palmprint.

For color palmprints, Table 5 shows the results obtained with fusion at the image level, identifying the fusion rule with the highest ROR among all rules. The table clearly shows that the contrast rule offers the best results (ROR = 98.167 % and RPR = 57). The system can operate at a ROR of 98.000 % with RPR = 64 in the case of the Laplacian method. Finally, the gradient method produces poor performance (ROR = 93.722 % and RPR = 200).

We also performed a closed-set identification scenario by applying all fusion rules to the matching scores obtained from the color palmprint modalities; the resulting ROR and RPR are shown in Table 6.

From this table, it can be seen that the WHT fusion rule performs better than the other rules; in this case the system works at a ROR of 99.139 % with RPR = 249. The SUM and MAX rules follow, while the MIN and MUL rules produce poor performance.

Finally, analysis of the preceding results shows that, in general, the performance of the uni-modal closed-set identification system is significantly improved by using the infrared palmprint modality.

VIII. CONCLUSION AND FURTHER WORK

In this paper, several biometric systems for person identification using palmprint images are proposed. We have demonstrated, through the results obtained for all systems and



Fig. 5. Open set identification system performance. (a) Comparison between NIR modality and RGB combination, (b) The genuine and impostor distribution and (c) The dependency of the FAR and the FRR on the value of the threshold.

Table 4: Closed set uni-modal identification systems performances

        GRAY            BLUE            RED             GREEN           NIR
        ROR     RPR     ROR     RPR     ROR     RPR     ROR     RPR     ROR     RPR
        89.167  366     98.806  262     96.972  238     79.000  388     99.306  54

Table 5: Closed set multi-modal identification systems performances (Image fusion)

COMBINATION   DWT           PCA           CONTRAST      LAPLACIAN     GRADIENT
              ROR     RPR   ROR     RPR   ROR     RPR   ROR     RPR   ROR     RPR
RGB           96.500  164   94.139  244   98.167  57    98.000  64    93.722  200

Table 6: Closed set multi-modal identification systems performances (Matching score fusion)

COMBINATION   SUM           WHT           MIN           MAX           MUL
              ROR     RPR   ROR     RPR   ROR     RPR   ROR     RPR   ROR     RPR
RGB           98.972  234   99.139  249   87.694  49    96.139  341   87.694  17

by establishing a comparison using palmprint images as inputs, that infrared palmprint images are reliable for efficient person identification. For further improvement of the system, our future work will focus on performance evaluation using a larger database, and on combining infrared palmprint information with other biometrics, such as the infrared face, to obtain higher identification accuracy.

REFERENCES

[1] J. Wayman, A. Jain, D. Maltoni and D. Maio, "Biometric Systems, Technology, Design and Performance Evaluation", Springer, London, 2005.

[2] N. V. Boulgouris, K. N. Plataniotis and E. Micheli-Tzanakou, "Biometrics: Theory, Methods, and Applications", D. B. Fogel, Series Editor, Wiley-IEEE Press, 2010.

[3] A. Kumar and D. Zhang, "Improving Biometric Authentication Performance from the User Quality", IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 3, pp. 730-735, 2010.

[4] K. Kumar Sricharan, A. Aneesh Reddy and A. G. Ramakrishnan, "Knuckle based Hand Correlation for User Authentication", Biometric Technology for Human Identification III, Proc. of SPIE, Vol. 6202, 62020X, 2006.

[5] D. Zhang, G. Lu, W. Li, L. Zhang and N. Luo, "Three dimensional palmprint recognition using structured light imaging", International Conference on Biometrics: Theory, Applications and Systems, pp. 1-6, Sept. 2008.

[6] A. Meraoumia, S. Chitroub and A. Bouridane, "Fusion of Multi-spectral Palmprint Images for Improved Identification Performance", Photonics and Optoelectronics, Vol. 1, No. 1, pp. 13-19, 2012.

[7] D. Zhang, Z. Guo, G. Lu, L. Zhang and W. Zuo, "An Online System of Multispectral Palmprint Verification", IEEE Transactions on Instrumentation and Measurement, Vol. 59, No. 2, pp. 480-490, February 2010.

[8] M. S. Bartlett, J. R. Movellan and T. J. Sejnowski, "Face recognition by independent component analysis", IEEE Transactions on Neural Networks, Vol. 13, No. 6, pp. 1450-1464, 2002.

[9] A. Meraoumia, S. Chitroub and A. Bouridane, "Fusion of Finger-Knuckle-Print and Palmprint for an Efficient Multi-Biometric System of Person Recognition", IEEE International Conference on Communications (ICC), Kyoto, Japan, pp. 1-5, June 2011.

[10] D. Zhang, Z. Guo, G. Lu, L. Zhang, Y. Liu and W. Zuo, "Online joint palmprint and palmvein verification", Expert Systems with Applications, Vol. 38, pp. 2621-2631, 2011.

[11] A. Poinsot, F. Yang and M. Paindavoine, "Small Sample Biometric Recognition Based on Palmprint and Face Fusion", Fourth International Multi-Conference on Computing in the Global Information Technology, pp. 118-122, 2009.

[12] A. N. Akansu and R. A. Haddad, "Multiresolution signal decomposition", Academic Press, New York, 1992.

[13] M. He, S.-J. Horng, P. Fan, R.-S. Run, R.-J. Chen, J.-L. Lai, M. K. Khan and K. O. Sentosa, "Performance evaluation of score level fusion in multimodal biometric systems", Pattern Recognition, Vol. 43, pp. 1789-1800, 2010.

[14] The Hong Kong Polytechnic University (PolyU) Multispectral Palmprint Database. Available at: http://www.comp.polyu.edu.hk/biometrics/MultispectralPalmprint/MSP.htm.

[15] T. Connie, A. Teoh, M. Goh and D. Ngo, "Palmprint Recognition with PCA and ICA", Conference of Image and Vision Computing New Zealand 2003, pp. 227-232, 2003.

[16] A. K. Jain, A. Ross and S. Prabhakar, "An Introduction to Biometric Recognition", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 1, pp. 4-20, Jan. 2004.