Deep Palmprint Recognition with Alignment and Augmentation of Limited Training Samples
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/440249 , vital:73760 , xlink:href="https://doi.org/10.1007/s42979-021-00859-3"
- Description: This paper builds upon a previously proposed automatic palmprint alignment and classification system. The proposed system was geared towards palmprints acquired from either contact or contactless sensors. It was robust to finger location and fist shape changes—accurately extracting the palmprints in images without fingers. An extension to this previous work includes comparisons of traditional and deep learning models, both with hyperparameter tuning. The proposed methods are compared with related verification systems and a detailed evaluation of open-set identification. The best results were yielded by a proposed Convolutional Neural Network based on VGG-16, which outperformed tuned VGG-16 and Xception architectures. All deep learning algorithms are provided with augmented data, included in the tuning process, enabling significant accuracy gains. Highlights include near-zero and zero EER on IITD-Palmprint verification using one training sample and a leave-one-out strategy, respectively. Therefore, the proposed palmprint system is practical as it is effective on data containing many and few training examples.
- Full Text:
- Date Issued: 2022
Deep palmprint recognition with alignment and augmentation of limited training samples
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2022
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/464074 , vital:76473 , xlink:href="https://doi.org/10.1007/s42979-021-00859-3"
- Description: This paper builds upon a previously proposed automatic palmprint alignment and classification system. The proposed system was geared towards palmprints acquired from either contact or contactless sensors. It was robust to finger location and fist shape changes—accurately extracting the palmprints in images without fingers. An extension to this previous work includes comparisons of traditional and deep learning models, both with hyperparameter tuning. The proposed methods are compared with related verification systems and a detailed evaluation of open-set identification. The best results were yielded by a proposed Convolutional Neural Network based on VGG-16, which outperformed tuned VGG-16 and Xception architectures. All deep learning algorithms are provided with augmented data, included in the tuning process, enabling significant accuracy gains. Highlights include near-zero and zero EER on IITD-Palmprint verification using one training sample and a leave-one-out strategy, respectively. Therefore, the proposed palmprint system is practical as it is effective on data containing many and few training examples.
- Full Text:
- Date Issued: 2022
Improved palmprint segmentation for robust identification and verification
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2019
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/460576 , vital:75966 , xlink:href="https://doi.org/10.1109/SITIS.2019.00013"
- Description: This paper introduces an improved approach to palmprint segmentation. The approach enables both contact and contactless palmprints to be segmented regardless of constraining finger positions or whether fingers are even depicted within the image. It is compared with related systems and evaluated in more comprehensive identification tests, which show consistent results across other datasets. Experiments include contact and contactless palmprint images. The proposed system achieves highly accurate classification results, and highlights the importance of effective image segmentation. The proposed system is practical as it is effective with small or large amounts of training data.
- Full Text:
- Date Issued: 2019
Efficient Biometric Access Control for Larger Scale Populations
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2018
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465667 , vital:76630 , xlink:href="https://www.researchgate.net/profile/Dane-Brown-2/publication/335378829_Efficient_Biometric_Access_Control_for_Larger_Scale_Populations/links/5d61159ea6fdccc32ccd2c8a/Efficient-Biometric-Access-Control-for-Larger-Scale-Populations.pdf"
- Description: Biometric applications and databases are growing at an alarming rate. Processing large or complex biometric data induces longer wait times that can limit usability during application. This paper focuses on increasing the processing speed of biometric data, and calls for a parallel approach to data processing that is beyond the capability of a central processing unit (CPU). The graphical processing unit (GPU) is effectively utilized with compute unified device architecture (CUDA), and results in at least triple the processing speed when compared with a previously presented accurate and secure multimodal biometric system. When saturating the CPU-only implementation with more individuals than the available thread count, the GPU-assisted implementation outperforms it exponentially. The GPU-assisted implementation is also validated to have the same accuracy as the original system, and thus shows promising advancements in both accuracy and processing speed in the challenging big data world.
- Full Text:
- Date Issued: 2018
Enhanced biometric access control for mobile devices
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465678 , vital:76631
- Description: In the new Digital Economy, mobile devices are increasingly being used for tasks that involve sensitive and/or financial data. Hitherto, security on smartphones has not been a priority and furthermore, users tend to ignore the security features in favour of more rapid access to the device. We propose an authentication system that can provide enhanced security by utilizing multi-modal biometrics from a single image, captured at arm’s length, containing unique face and iris data. The system is compared to state-of-the-art face and iris recognition systems, in related studies using the CASIA-Iris-Distance dataset and the IITD iris dataset. The proposed system outperforms the related studies in all experiments and shows promising advancements to at-a-distance iris recognition on mobile devices.
- Full Text:
- Date Issued: 2017
Feature-fusion guidelines for image-based multi-modal biometric fusion
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/460063 , vital:75889 , xlink:href="https://doi.org/10.18489/sacj.v29i1.436"
- Description: The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT Face, MCYT Fingerprint and CASIA Palmprint.
- Full Text:
- Date Issued: 2017
Feature-fusion guidelines for image-based multi-modal biometric fusion
- Brown, Dane L, Bradshaw, Karen L
- Authors: Brown, Dane L , Bradshaw, Karen L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , article
- Identifier: http://hdl.handle.net/10962/465689 , vital:76632 , xlink:href="https://hdl.handle.net/10520/EJC-90afb1388"
- Description: The feature level, unlike the match score level, lacks multi-modal fusion guidelines. This work demonstrates a new approach for improved image-based biometric feature-fusion. The approach extracts and combines the face, fingerprint and palmprint at the feature level for improved human identification accuracy. Feature-fusion guidelines, proposed in our recent work, are extended by adding a new face segmentation method and the support vector machine classifier. The new face segmentation method improves the face identification equal error rate (EER) by 10%. The support vector machine classifier combined with the new feature selection approach, proposed in our recent work, outperforms other classifiers when using a single training sample. Feature-fusion guidelines take the form of strengths and weaknesses as observed in the applied feature processing modules during preliminary experiments. The guidelines are used to implement an effective biometric fusion system at the feature level, using a novel feature-fusion methodology, reducing the EER of two groups of three datasets namely: SDUMLA face, SDUMLA fingerprint and IITD palmprint; MUCT Face, MCYT Fingerprint and CASIA Palmprint.
- Full Text:
- Date Issued: 2017
Improved Automatic Face Segmentation and Recognition for Applications with Limited Training Data
- Bradshaw, Karen L, Brown, Dane L
- Authors: Bradshaw, Karen L , Brown, Dane L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/460085 , vital:75891 , ISBN 9783319582740 , https://doi.org/10.1007/978-3-319-58274-0_33
- Description: This paper introduces varied pose angle, a new approach to improve face identification given large pose angles and limited training data. Face landmarks are extracted and used to normalize and segment the face. Our approach does not require face frontalization and achieves consistent results. Results are compared using frontal and non-frontal training images for Eigen and Fisher classification of various face pose angles. Fisher scales better with more training samples, but only on a high-quality dataset. Our approach achieves promising results for three well-known face datasets.
- Full Text:
- Date Issued: 2017
“Enhanced biometric access control for mobile devices,” in Proceedings of the 20th Southern Africa Telecommunication Networks and Applications Conference
- Bradshaw, Karen L, Brown, Dane L
- Authors: Bradshaw, Karen L , Brown, Dane L
- Date: 2017
- Subjects: To be catalogued
- Language: English
- Type: text , book
- Identifier: http://hdl.handle.net/10962/460025 , vital:75885 , ISBN 9780620767569
- Description: In the new Digital Economy, mobile devices are increasingly being used for tasks that involve sensitive and/or financial data. Hitherto, security on smartphones has not been a priority and furthermore, users tend to ignore the security features in favour of more rapid access to the device. We propose an authentication system that can provide enhanced security by utilizing multi-modal biometrics from a single image, captured at arm’s length, containing unique face and iris data. The system is compared to state-of-the-art face and iris recognition systems, in related studies using the CASIA-Iris-Distance dataset and the IITD iris dataset. The proposed system outperforms the related studies in all experiments and shows promising advancements to at-a-distance iris recognition on mobile devices.
- Full Text:
- Date Issued: 2017