Human identification has traditionally been carried out using ID cards and passwords. Such techniques can be violated easily: passwords can be guessed and ID cards can be stolen, rendering them unreliable [Jain et al. 2006].
< Jain, A., Bolle, R., & Pankanti, S. (Eds.). (2006). Biometrics: personal identification in networked society (Vol. 479). Springer Science & Business Media.>
Biometrics is defined as the process of identifying a person's identity on the basis of physiological or behavioral features; it is capable of differentiating between a genuine person and an impostor. A biometric system identifies a person by extracting a set of features and verifying it against the set stored in a database, depending on the application context. An individual's identity can be established in two ways: identification and verification. The working of a biometric system can be divided into eight stages (Liu and Silverman 2001): first, capturing the biometric; second, processing, enrolling, and extracting the biometric template; third, storing the template in a token such as a smart card; fourth, live-scanning the selected biometric; fifth, processing the scan and extracting a fresh biometric template; sixth, matching the scanned biometric against the stored one; seventh, providing the matching score to the business application; and eighth, recording a secure audit trail with respect to the system. The processing of a biometric system is shown below:
< Liu, Simon, and Mark Silverman. “A practical guide to biometric security technology.” IT Professional 3.1 (2001): 27-32.>
Figure 1: How a biometric system works
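The eight stages above can be illustrated in code. The following is a minimal Python sketch, not a real implementation: the raw "measurements", the normalization step, and the score threshold are all invented for illustration.

```python
import math

def extract_template(sample):
    """Stages 2/5: process a raw capture into a fixed-length feature template.
    Here the 'features' are just normalized numbers; a real system would run
    image/signal processing on the captured biometric."""
    total = sum(sample)
    return [x / total for x in sample]

def match_score(template_a, template_b):
    """Stage 6: compare two templates; a higher score means a closer match."""
    return 1.0 / (1.0 + math.dist(template_a, template_b))

# Stages 1-3: enrollment -- capture, extract, store (e.g. on a smart card)
enrolled_capture = [4.0, 2.0, 6.0, 8.0]      # hypothetical raw measurements
stored_template = extract_template(enrolled_capture)

# Stages 4-6: live scan, extract a fresh template, match against the stored one
live_capture = [4.1, 1.9, 6.2, 7.8]
live_template = extract_template(live_capture)
score = match_score(stored_template, live_template)

# Stages 7-8: hand the score to the business application and log an audit trail
decision = "accept" if score > 0.9 else "reject"
audit_log = [("match", round(score, 3), decision)]
```

The split between enrollment (stages 1-3) and live matching (stages 4-8) mirrors the two halves of the figure above.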
A biometric system is essentially a pattern recognition system that operates by acquiring biometric data from an individual, processing the acquired data, and comparing it against the template set stored in the database. Depending on the application context, a biometric system may operate in either verification mode or identification mode.
- In verification mode, the system validates an individual's identity by comparing the captured biometric features with that individual's biometric template stored in the system database. A person who wishes to be positively identified claims an identity, generally by means of a personal identification number (PIN) or a user name, and the system conducts a one-to-one comparison to decide whether the claim is true. This mode aims to answer the question "Is this person who he/she claims to be?"
Figure 2: Diagram of the verification task
- In identification mode, the individual's identity is recognized by searching the templates of all the users in the database for a match; the system conducts a one-to-many comparison to establish the individual's identity. The identification mode classifies and identifies an unknown identity, aiming to answer the question "Who is this person?". Identification is a critical component in negative recognition applications, where the system establishes whether the person is who he/she (implicitly or explicitly) denies being. The purpose of negative recognition is to prevent a single person from using multiple identities. Identification may also be used in positive recognition for convenience (where the individual is not required to claim an identity). While traditional methods of personal recognition such as passwords and tokens work only for positive recognition, biometrics alone can be used for negative recognition (Prabhakar, Pankanti, and Jain 2003).
< Prabhakar, Salil, Sharath Pankanti, and Anil K. Jain. “Biometric recognition: Security and privacy concerns.” IEEE security & privacy 99.2 (2003): 33-42.>
Figure 3: Diagram of the identification task
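The one-to-one versus one-to-many contrast can be made concrete in a few lines. In this hypothetical sketch the template database, similarity function, and acceptance threshold are all invented for illustration.

```python
import math

def similarity(a, b):
    """Toy similarity: higher = closer templates."""
    return 1.0 / (1.0 + math.dist(a, b))

# Hypothetical template database: enrolled identity -> stored template
database = {
    "alice": [0.20, 0.10, 0.30, 0.40],
    "bob":   [0.35, 0.25, 0.15, 0.25],
}
THRESHOLD = 0.9

def verify(claimed_id, live_template):
    """Verification: one-to-one -- compare only against the claimed identity."""
    return similarity(database[claimed_id], live_template) >= THRESHOLD

def identify(live_template):
    """Identification: one-to-many -- search every enrolled template."""
    best_id = max(database, key=lambda uid: similarity(database[uid], live_template))
    if similarity(database[best_id], live_template) >= THRESHOLD:
        return best_id
    return None  # unknown person

probe = [0.21, 0.09, 0.31, 0.39]
assert verify("alice", probe)      # "Is this person who she claims to be?"
assert identify(probe) == "alice"  # "Who is this person?"
```

Note that `identify` returning `None` for an unenrolled probe is what makes identification usable for negative recognition, whereas `verify` only ever answers the claim it was given.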
One of the most studied research topics in computer vision is face recognition. After many years of research, conventional face recognition using visible light under controlled and homogeneous conditions is approaching a mature technology. It has been deployed at industrial scale for biometric border control and it is achieving better performance than humans.
< Zhao, W., Chellappa, R., Phillips, P.J., and Rosenfeld, A. “Face recognition: a literature survey.” ACM Computing Surveys (CSUR) (2003): 399–458.>
< Frontex. BIOPASS II: automated biometric border crossing systems based on electronic passports and facial recognition: RAPID and SmartGate, 2010.>
< Sun, Y., Chen, Y., Wang, X., and Tang, X. “Deep learning face representation by joint identification-verification.” NIPS (2014).>
In 1966, Bledsoe proposed a face recognition strategy that modeled and classified faces based on factors such as normalized distances and ratios among feature points.
< Bledsoe, W.W. Man-Machine Facial Recognition: Report on a Large-Scale Experiment. Technical Report PRI-22. Panoramic Research Inc., California (1966).>
In 1972, Sakai, Nagao and Kanade used simple heuristic and anthropometric techniques for face recognition, extracting feature points of the human face such as the nose, eyes and mouth.
< Sakai, Toshiyuki, Makoto Nagao, and Takeo Kanade. Computer analysis and classification of photographs of human faces. Kyoto University, 1972.>
In 1991, Turk and Pentland presented face recognition using eigenfaces, proposing a framework for the detection and identification of human faces. In this work, the system was able to recognize faces automatically without any guidance.
< Turk, Matthew A., and Alex P. Pentland. “Face recognition using eigenfaces.” Computer Vision and Pattern Recognition, 1991. Proceedings CVPR’91., IEEE Computer Society Conference on. IEEE, 1991.>
In 2005, Mita, Kaneko and Hori proposed joint Haar-like features for face detection. In their experiments, the detector yields higher classification performance and reduces the error by 37% in comparison with Viola and Jones' detector. The detector is also 2.6 times as fast as Viola and Jones' detector at the same performance level.
<P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. Proc. of CVPR, pages 511–518, 2001.
T. Mita, T. Kaneko, O. Hori, Joint Haar-like Features for Face Detection, “Proceedings of the Tenth IEEE International Conference on Computer Vision”, 1550- 5499/05 ©2005 IEEE.>
In 2012, Faizan, Aaima and Zeeshan conducted an experiment on image-based face detection and recognition to evaluate face detection and recognition methods and provide a complete solution for image-based face detection and recognition.
A recent study proposed a facial feature extraction method combining Gabor wavelets, PCA and SVM. The method exploits the distinctiveness of Gabor features, which are already widely used as face representations. The authors claimed that the proposed method is well suited to handle variations in illumination, pose, and expression.
<Bhimanwar, Chinmay, et al. “Face Identification.” International Journal of Engineering Science 11923 (2017).>
Voice recognition is a technology that allows users to use their voice as an input device. Voice recognition may be used to instruct and give commands to the computer, such as opening application programs. In early voice recognition systems, each word had to be separated by a distinct pause in order to be recognized. Newer voice recognition applications allow a user to speak commands fluently to the computer and can recognize speech at up to 160 words per minute. Some applications are designed to recognize text and format it in order to allow continuous speech.
Voice recognition uses a neural net to learn and recognize a human's voice and remember the way that person says each word. This customization allows the system to distinguish among human voices even though each person speaks with a different accent and inflection.
In the 1930s, Homer Dudley of Bell Laboratories proposed a system model for speech analysis and synthesis. Later, in 1952, three Bell Labs researchers built a system for single-speaker digit recognition; it operated by locating the formants in the power spectrum of each utterance.
In 1975, Itakura applied fundamental pattern recognition technology to speech recognition based on linear predictive coding (LPC) methods.
<F. Itakura, Minimum Prediction Residual Principle Applied to Speech Recognition, IEEE Trans. Acoustics, Speech and Signal Proc., Vol. ASSP-23, pp. 57-72, Feb. 1975>
In 1976, Reddy provided a comprehensive review of the state of the art of voice recognition at that time, helping non-experts in the field understand the early history of voice recognition.
<Reddy, Dabbala Rajagopal. “Speech recognition by machine: A review.” Proceedings of the IEEE 64.4 (1976): 501-531.>
In 1990, Dragon launched DragonDictate, the first commercial speech recognition product for consumers; it recognized discrete speech, and the continuous-speech Dragon NaturallySpeaking, recognizing about 100 words per minute, followed in 1997.
< Pinola, Melanie. “Speech Recognition Through the Decades: How We Ended Up With Siri”. PC World. Retrieved 30 July 2017>
In 1992, the Voice Recognition Call Processing service was deployed by AT&T to route telephone calls without the use of a human operator.
< J. G. Wilpon and D. B. Roe, AT&T Telephone Network Applications of Speech Recognition, Proc. COST232 Workshop, Rome, Italy, Nov. 1992.>
In the late 1980s, CMU's Sphinx system introduced large-vocabulary speaker-independent continuous speech recognition. The key was to use speech data from a large number of speakers to train the HMM-based system.
<Lee, Chin-Hui, and Qiang Huo. “On adaptive decision rules and decision parameter adaptation for automatic speech recognition.” Proceedings of the IEEE 88.8 (2000): 1241-1269.>
In 2015, Google reported a dramatic performance jump of 49% in its speech recognition using a Connectionist Temporal Classification-trained Long Short-Term Memory network, which is now available to all smartphone users through Google voice search.
<Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays and Johan Schalkwyk (September 2015): Google voice search: faster and more accurate.>
Hand Geometry Recognition
In 2000, Sanchez-Reillo, Sanchez-Avila and Gonzalez-Marcos introduced biometric identification using hand geometry measurements. In this experiment, 10 photographs were collected from each of 20 people on different days over three months. They extracted 31 hand features from a color photograph, grouped as 21 widths, three heights, four deviations and three angles. They obtained up to 97 percent success in classification and error rates of less than 10 percent in verification.
< Sanchez-Reillo, Raul, Carmen Sanchez-Avila, and Ana Gonzalez-Marcos. “Biometric identification through hand geometry measurements.” IEEE Transactions on pattern analysis and machine intelligence 22.10 (2000): 1168-1171.>
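Widths, heights and angles of the kind used in such systems are simple geometric functions of landmark points located on the hand silhouette. The following toy sketch uses invented landmark coordinates, not data from any of the cited studies:

```python
import math

# Hypothetical fingertip/valley landmarks (x, y) located on a hand silhouette;
# a real system would detect these automatically from the hand image.
landmarks = {
    "index_tip":     (2.0, 9.0),
    "index_valley":  (2.5, 5.0),
    "middle_tip":    (4.0, 10.0),
    "middle_valley": (4.5, 5.2),
    "wrist":         (4.0, 0.0),
}

def finger_length(tip, valley):
    """A 'height'-type feature: distance from fingertip to finger valley."""
    return math.dist(landmarks[tip], landmarks[valley])

def angle_at_wrist(tip_a, tip_b):
    """An 'angle'-type feature: angle (degrees) between two fingertips
    as seen from the wrist point."""
    wx, wy = landmarks["wrist"]
    ax, ay = landmarks[tip_a]
    bx, by = landmarks[tip_b]
    va = (ax - wx, ay - wy)
    vb = (bx - wx, by - wy)
    dot = va[0] * vb[0] + va[1] * vb[1]
    return math.degrees(math.acos(dot / (math.hypot(*va) * math.hypot(*vb))))

# A (very short) feature vector; the cited systems use 15-54 such features.
features = [
    finger_length("index_tip", "index_valley"),
    finger_length("middle_tip", "middle_valley"),
    angle_at_wrist("index_tip", "middle_tip"),
]
```

Matching then reduces to comparing such feature vectors with a distance measure (Euclidean, Hamming, etc.), as the studies below describe.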
In 2007, Varchol and Levicky proposed a biometric security system using hand geometry. In this experiment, 408 hand images were collected from 24 young people. The 21 extracted features included finger lengths, heights, palm area, etc. They obtained an EER of 4.62% and a FAR of 0.1812% using Euclidean distance, a Gaussian mixture model (GMM), and Hamming distance.
< Varchol, Peter, and Dusan Levicky. “Using of hand geometry in biometric security systems.” RADIOENGINEERING-PRAGUE- 16.4 (2007): 82.>
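The EER, FAR and FRR figures reported throughout this section are computed from the matcher's score distributions for genuine and impostor comparisons. A minimal sketch, with made-up scores rather than data from any cited study:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores accepted at the threshold;
    FRR: fraction of genuine scores rejected at the threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep candidate thresholds; the EER is the point where FAR and FRR
    are (as nearly as possible) equal."""
    candidates = sorted(set(genuine_scores) | set(impostor_scores))
    best = min(candidates,
               key=lambda t: abs(far_frr(genuine_scores, impostor_scores, t)[0]
                                 - far_frr(genuine_scores, impostor_scores, t)[1]))
    far, frr = far_frr(genuine_scores, impostor_scores, best)
    return (far + frr) / 2

genuine = [0.91, 0.88, 0.95, 0.80, 0.97]    # invented matcher scores
impostor = [0.40, 0.55, 0.83, 0.30, 0.62]
eer = equal_error_rate(genuine, impostor)
```

Raising the threshold trades a lower FAR for a higher FRR; reporting the EER summarizes that trade-off in a single number, which is why so many of the studies below quote it.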
In 2008, Adán et al. introduced a biometric system for verification and identification purposes based on natural hand layout. In this study, 5640 pictures were collected from 470 young users. Features such as the hand size, the finger sizes, the finger lengths and the crest points of the fingers were extracted from the image of the hand. They obtained an accuracy ratio of 97.58% on average and an EER of 1.3%.
< Adán, Miguel, et al. “Biometric verification/identification based on hands natural layout.” Image and Vision Computing 26.4 (2008): 451-465.>
In 2009, Ferrer et al. studied how image resolution affects biometric systems based on hand geometry. In this experiment two databases were used: an underhand database collected for the study, and an overhand database containing 10 different images from each of 85 users. They extracted 15 features, such as the finger ends, the finger valleys, and the exterior bases of the thumb, index and little fingers. They concluded that the input image resolution could be reduced to 72 dpi without loss of performance using an SVM.
< Ferrer, Miguel A., et al. “Hand geometry identification system performance.” Security Technology, 2009. 43rd Annual 2009 International Carnahan Conference on. IEEE, 2009.>
In 2009, Wang, Chen and Shih introduced a biometric method that fuses hand geometry and palmprint based on morphology. In this experiment, 1,560 hand images were captured from 260 different hands. They applied the concepts of morphology and Voronoi diagrams for feature extraction. They obtained a FAR of 0.0035% and a FRR of 5.7692% using a linear support vector machine.
< Wang, Wei-Chang, Wen-Shiung Chen, and Sheng-Wen Shih. “Biometric recognition by fusing palmprint and hand-geometry based on morphology.” Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on. IEEE, 2009.>
In 2010, Aghili and Sadjedi introduced an approach to personal verification and identification based on the geometry of four fingers. In this experiment, 500 pictures were collected from 50 users. They extracted 20 features from four fingers: the little, ring, middle and index fingers. They obtained an EER of 0.1743% and an accuracy of 99.81% in identification using an absolute distance classifier.
< Aghili, Bahareh, and Hamed Sadjedi. “Personal identification/verification by using four fingers.” Image and Signal Processing (CISP), 2010 3rd International Congress on. Vol. 6. IEEE, 2010.>
In 2012, Guo et al. proposed an approach for personal identification using hand geometrical features, improving the usability of the hand recognition system with an infrared illumination device. In this experiment, 6000 hand images captured from 100 different people (60 images per person) were used. They extracted 34 features from all the fingers for palm identification. They obtained a competitive average Correct Identification Rate (CIR) of 96.23% and a FAR of 1.85% on average using the LibSVM classifier.
<Guo, Jing-Ming, et al. “Contact-free hand geometry-based identification system.” Expert Systems with Applications 39.14 (2012): 11728-11736.>
In 2014, do Nascimento, Batista and Cavalcanti conducted a comparison of various learning algorithms used for recognition based on hand geometry. In this experiment, 678 images captured from 97 people were used, with 54 features extracted per hand. The experiment showed competitive results when compared to other state-of-the-art methods; for example, cross-validated classification with the SMO and BayesNet classifiers reached an accuracy of 99.85%.
< do Nascimento, Marcia VP, Leonardo Vidal Batista, and Nicomedes L. Cavalcanti. “Comparative study of learning algorithms for recognition by hand geometry.” Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on. IEEE, 2014.>
In 2015, Silva et al. studied how to improve Equal Error Rate (EER) performance for biometric authentication by hand geometry using a Genetic Algorithm-based approach. In this experiment, 1200 images were divided among 100 different persons; each person has 12 hand images, 7 of which are dorsal and the remaining 5 palm. 54 features were extracted and used as an attribute vector for classification. They obtained equal error rates as low as 0% and 0.01% on the training set and the test set respectively using the genetic algorithm. Moreover, they achieved a relative enhancement of 90.91% using the genetic algorithm for the test set in the best scenario.
< Silva, A. G. A., et al. “Analysis of the Performance Improvement Obtained by a Genetic Algorithm-based Approach on a Hand Geometry Dataset.” Proceedings on the International Conference on Artificial Intelligence (ICAI). The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), 2015.>
In 2017, Song, Cai and Zhang introduced a simple and reliable authentication method for mobile devices using hand geometry and behavioral information. In the proposed system, a user is authenticated by combining both geometry information and behavioral characteristics. In this experiment, 161 subjects were recruited and asked to perform various TFST gestures.
< Y. Song, Z. Cai and Z. L. Zhang, “Multi-touch Authentication Using Hand Geometry and Behavioral Information,” 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, 2017, pp. 357-372.>
The ear is a type of biometric with unique characteristics: the richness and stability of the ear's structure, the immutability of its form under changing expressions, and its uniform distribution of color.
< Hurley, David J., Mark S. Nixon, and John N. Carter. “Force field feature extraction for ear biometrics.” Computer Vision and Image Understanding 98.3 (2005): 491-512.>
In the early 1890s, Bertillon introduced the potential of the human ear for personal identification. In his study on biometrics he said:
“The ear, thanks to these multiple small valleys and hills which furrow across it, is the most significant factor from the point of view of identification. Immutable in its form since birth, resistant to the influences of environment and education, this organ remains, during the entire life, like the intangible legacy of heredity and of the intra-uterine life”.
<A. Bertillon. La photographie judiciaire, avec un appendice sur la classification et l’identification anthropometriques. Gauthier-Villars, Paris, 1890.>
In 1997, Burge and Burger introduced the first attempt to build an ear recognition system. A mathematical graph model was used to represent and match the edges and curves in a 2D ear image.
<Burge, Mark, and Wilhelm Burger. “Ear biometrics for machine vision.” 21st Workshop of the Austrian Association for Pattern Recognition. 1997.>
In 1999, Moreno, Sanchez and Vélez discussed the idea of using outer ear images to build an ear recognition system. In this experiment, a dataset of 168 images was used. The extracted features were outer ear points, information obtained from the ear shape and wrinkles, and macro-features obtained using a compression network. They obtained a recognition rate of 93% using a neural network.
<Moreno, Belé, Angel Sanchez, and José F. Vélez. “On the use of outer ear images for personal identification in security applications.” Security Technology, 1999. Proceedings. IEEE 33rd Annual 1999 International Carnahan Conference on. IEEE, 1999.>
In 2002, Yuizono et al. developed an ear recognition system using genetic local search. In the experiment, 660 images were collected from 110 persons. They obtained a recognition rate of 100% for registered persons, and a rejection rate of 100% for unknown persons.
< Yuizono, Takaya, et al. “Study on individual recognition for ear images by using genetic local search.” Evolutionary Computation, 2002. CEC’02. Proceedings of the 2002 Congress on. Vol. 1. IEEE, 2002>
In 2005, Hurley et al. proposed a new force field transformation to reduce the dimensionality of the original pattern space, and to maintain discriminatory power for classification at the same time. In this experiment, the XM2VTS face profiles database, which consists of 252 images collected from 63 subjects, was used. They obtained a recognition rate of 99.2%.
<Hurley, David J., Mark S. Nixon, and John N. Carter. “Force field feature extraction for ear biometrics.” Computer Vision and Image Understanding 98.3 (2005): 491-512.>
In 2005, Chen and Bhanu proposed a 3-D ear recognition system in which a two-step Iterative Closest Point procedure was introduced for matching 3D ears. In the first step, the model ear helix is aligned; in the second step, the transformation is refined to determine whether the match is good. In this experiment, they used a dataset of 30 3-D ear images collected from 30 users. They obtained an equal error rate of 6.7%.
< Chen, Hui, and Bir Bhanu. “Contour matching for 3D ear recognition.” Application of Computer Vision, 2005. WACV/MOTIONS’05 Volume 1. Seventh IEEE Workshops on. Vol. 1. IEEE, 2005.>
In 2009, Yuan and Zhang introduced an ear detection approach with two stages: off-line cascaded classifier training and on-line ear detection. In this experiment, the USTB ear, CAS-PEAL face and UMIST face databases were used. They used an 18-layer cascaded classifier to train and detect ears. They obtained a FRR of 1.61% and a FAR of 2.53%.
<Yuan, Li, and Feng Zhang. “Ear detection based on improved adaboost algorithm.” Machine Learning and Cybernetics, 2009 International Conference on. Vol. 4. IEEE, 2009.>
In 2015, Benzaoui, Hezil and Boukrouche improved on existing ear recognition systems by implementing a feature extraction approach that automates 2D ear recognition using local texture descriptors. In this experiment, two versions of the IIT Delhi database were used. The first version consists of 493 images of 125 different users; the second consists of 793 images collected from 221 users. SVM classifiers with linear and RBF kernels were applied to the IIT Delhi databases. They obtained very competitive performance compared to other state-of-the-art automatic ear recognition systems.
< Benzaoui, Amir, Nabil Hezil, and Abdelhani Boukrouche. “Identity recognition based on the external shape of the human ear.” Applied Research in Computer Science and Engineering (ICAR), 2015 International Conference on. IEEE, 2015.>
In 2017, Arunachalam and Alagarsamy introduced a powerful ear recognition system using a band-limited phase-only correlation (BLPOC) algorithm for feature extraction. In this experiment, the IIT Delhi ear database was used for simulation analyses. The proposed BLPOC algorithm handles local image deformation through local block matching.
<Arunachalam, Muthukumar, and Santham Bharathy Alagarsamy. “An efficient ear recognition system using DWT & BLPOC.” Inventive Communication and Computational Technologies (ICICCT), 2017 International Conference on. IEEE, 2017.>
Human smell is unique; it is referred to as body odor. Body odor recognition is a contactless physical biometric that attempts to identify people by analyzing and studying the olfactory properties of their body scent. Special sensors capture the smell by obtaining the odor from non-intrusive parts of the body, such as the back of the hand. Methods of capturing a person's smell are being explored by Mastiff Electronic Systems.
< A. K. Jain, A. Ross, and S. Prabhakar, “An introduction to biometric recognition,” IEEE Trans. Circuits Syst. Video Technology, Special Issue Image- and Video-Based Biomet., Volume 14, Issue 1, Jan. 2004, pp. 4–20.>
Solid phase micro-extraction (SPME) is a simple solvent-free headspace extraction technique which allows for volatile organic compounds (VOCs) present in the headspace (gas phase above an item) to be sampled at room temperature.
SPME in conjunction with GC/MS has been demonstrated to be a viable route to extract and analyze the VOCs present in the headspace of collected human secretions.
Every human odor is composed of chemicals known as volatiles. These are separated by the system and converted into a template. The use of odor sensors raises privacy concerns, as a person's odor carries a great deal of sensitive personal information: it is possible to infer certain illnesses or recent activities by analyzing personal odor.
In 2003, Di Natale et al. used an array of gas sensors to identify 9 specific gases in the breath of patients diagnosed with lung cancer. They obtained an accuracy rate above 90%.
< Di Natale, Corrado, et al. “Lung cancer identification by the analysis of breath by means of an array of non-selective gas sensors.” Biosensors and Bioelectronics 18.10 (2003): 1209-1218.>
In 2003, Gottfried and Dolan investigated how human olfactory perception can be facilitated by visual cues. They stated that although the human nose contains about 400 types of smell receptors, there are only three kinds of receptors in the human visual system. They concluded that the odor system is not universally standardized.
< Gottfried, Jay A., and Raymond J. Dolan. “The nose smells what the eye sees: crossmodal visual facilitation of human olfactory perception.” Neuron39.2 (2003): 375-386.>
In 2013, Rodriguez-Lujan et al. investigated the possibility of using body odor for biometric identification. In this experiment, they captured 728 samples from 13 subjects over 28 sessions. They obtained a recognition rate above 85% on average, and suggested that odor biometrics should be used alongside other biometric technologies to enhance effectiveness.
< Rodriguez-Lujan, Irene, et al. “Analysis of pattern recognition and dimensionality reduction techniques for odor biometrics.” Knowledge-Based Systems 52 (2013): 279-289.>
In 2010, Gibbs discussed the perception and acceptance of body odor recognition. He stated that body odor can be acquired using an array of chemical sensors sensitive to different organic compounds, and concluded that odor scanning and its security and privacy need to be improved.
< M.D. Gibbs, Biometrics: body odor authentication perception and acceptance, SIGCAS Comput. Soc. 40 (2010) 16.>
In 2014, Inbavalli and Nandhini investigated the possibility of building a body odor recognition system.
< Inbavalli, P., and G. Nandhini. “Body odor as a biometric authentication.” Int. J. Comput. Sci. Inform. Technol 5.5 (2014): 6270-6274.>
In 2014, Shu, Liu and Fang proposed a novel authentication scheme based on body odor. In this work, they collected human body odor through gas sensor arrays; for the detection and analysis phases they used gas chromatography and mass spectrometry, principal component analysis, and a neural network.
< Shu, Minglei, Yunxiang Liu, and Hua Fang. “Identification authentication scheme using human body odour.” Control Science and Systems Engineering (CCSSE), 2014 IEEE International Conference on. IEEE, 2014.>
In 2014, Spanish researchers stated that each person's body odor has unique patterns that remain unchanged over time. This stability provides an accuracy of 85% for identification. The GB2S group at UPM studied body odor by analyzing a dataset of 13 people and found that the unique patterns in body odor enable identification with an error rate of 15%.
< Spanish researchers sniff out an emerging biometric modality, http://www.biometricupdate.com/201402/spanish-researchers-sniff-out-an-emerging-biometric-modality>
Lip-based recognition is less developed than the recognition of other human physical attributes such as fingerprints, voice patterns, blood vessel patterns, or the face. For this reason, results in this field are still being improved and new recognition techniques are being explored.
In cheiloscopy, the identification of a given person takes place on the basis of an analysis of characteristic features located in the vermilion border.
Lip visual features are divided into three categories: shape-based features, appearance-based features, and combinations of the two.
< Howell, D., Cox, S., Theobald, B., 2016. Visual units and confusion modelling for automatic lip-reading. Image Vis. Comput. 51, 1–12.>
Several methods have been proposed for lip print examination. These methods use, inter alia, statistical analyses, the Hough transform, lip shape analyses, Dynamic Time Warping, segmentation, and similarity coefficients.
- Kasprzak, J., Leczynska, B.: Cheiloscopy. Human identification on the basis of lip trace (in Polish). KGP, Warsaw, Poland (2001)
- Wrobel, K., Doroz, R., Palys, M.: A method of lip print recognition based on sections comparison. In: IEEE Int. Conference on Biometrics and Kansei Engineering (ICBAKE 2013), Akihabara, Tokyo, Japan, pp. 47–52 (2013)
- Choras, M.: The lip as a biometric. Pattern Analysis and Applications (Springer) 13, 105–112 (2010)
- Porwik, P., Orczyk, T.: DTW and voting-based lip print recognition system. In: Cortesi, A., Chaki, N., Saeed, K., Wierzchoń, S. (eds.) CISIM 2012. LNCS, vol. 7564, pp. 191–202. Springer, Heidelberg (2012)
- Koprowski, R., Wrobel, Z.: The cell structures segmentation. In: 4th International Conference on Computer Recognition Systems (CORES 05), pp. 569–576 (2005)
- Cha, S.: Comprehensive Survey on Distance/Similarity Measures between Probability Density Functions. International Journal of Mathematical Models and Methods in Applied Sciences 1(4), 300–307 (2007)
In the 1950s, lip prints were first proposed as a means of human identification.
< Ball, J. “The current status of lip prints and their use for identification.” The Journal of forensic odonto-stomatology 20.2 (2002): 43-46.>
In 1998, Wark and Sridharan proposed a new syntactic approach to lip feature extraction for speaker identification.
< Wark, Timothy, and Sridha Sridharan. “A syntactic approach to automatic lip feature extraction for speaker identification.” Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on. Vol. 6. IEEE, 1998.>
In 2001, Kasprzak and Leczynska stated that a lip print has 1145 unique features which can be used to differentiate between individuals.
< Kasprzak, J., Leczynska, B.: Cheiloscopy. Human identification on the basis of lip trace (in Polish). KGP, Warsaw, Poland (2001)>
In 2002, Gomez et al. proposed a lip biometric identification system based on lip shape. In this experiment, they collected 500 face images of 50 subjects over 10 sessions. They focused only on the area around the lips and used an image transform to extract the lip shape. Two sets of features were extracted: the first was obtained from the polar coordinates of the lip envelope, and the second consisted of samples of the lip envelope height and width. They obtained a classification hit ratio of 96.9% and an equal error rate of 0.015 on average.
< Gomez, Enrique, et al. “Biometric identification system by lip shape.” Security Technology, 2002. Proceedings. 36th Annual 2002 International Carnahan Conference on. IEEE, 2002.>
In 2012, Wang et al. investigated different physiological and behavioral lip features based on their discriminative power in speaker identification and verification. In this experiment, a dataset of 40 subjects was used. During data collection, each subject was asked to repeat the same digits, 3725, in English ten times. They obtained identification accuracy above 90% and a verification error rate under 5% using the static lip texture feature and the dynamic shape deformation feature.
<Wang, Shi-Lin, and Alan Wee-Chung Liew. “Physiological and behavioral lip biometrics: A comprehensive study of their discriminative power.” Pattern Recognition 45.9 (2012): 3328-3335.>
In 2010, Choraś introduced standard geometrical parameters for use in lip biometric systems. In this experiment, 114 lower-face images were captured from 38 people. The extracted feature vector contained geometrical parameters, Hu moments, central moments, Zernike moments, and statistical color features in the RGB, YUV and HSV color spaces. They obtained a recognition rate of 82%.
<Choraś, Michał. “The lip as a biometric.” Pattern Analysis and Applications13.1 (2010): 105-112.>
In 2012, Liu, Lin, and Guo investigated how the lips affect biometric recognition systems. They proposed a lip recognition system that can work with partial face images. They obtained a correct accept rate above 98% and a false accept rate below 0.066% on a dataset of 29 subjects.
< Liu, Yun-Fu, Chao-Yu Lin, and Jing-Ming Guo. “Impact of the lips for biometrics.” IEEE Transactions on Image Processing 21.6 (2012): 3092-3101.>
In 2015, Wrobel, Doroz, and Palys proposed an automatic lip-based personal identification system that analyzes individual features and measures the similarity between lip prints by comparing their lower and upper bifurcations. In this experiment, 120 lip prints were collected from 30 people. They obtained an EER of approximately 23% as their best result.
< Wrobel, Krzysztof, Rafał Doroz, and Malgorzata Palys. “Lip print recognition method using bifurcations analysis.” Asian Conference on Intelligent Information and Database Systems. Springer, Cham, 2015.>
In 2016, Lu, Wu, and He investigated the possibility of using lip texture for person identification. They introduced a new method that jointly models the appearance and the spatio-temporal information of lip texture using convolutional neural networks and long short-term memory (LSTM) networks. In this experiment, they collected 11,123 videos from 57 subjects. During data collection, subjects were asked to speak the number 123456789 in Chinese. They achieved a recognition rate of 96.01% on average.
<Lu, Zhihe, Xiang Wu, and Ran He. “Person identification from lip texture analysis.” Digital Signal Processing (DSP), 2016 IEEE International Conference on. IEEE, 2016.>
In 2017, Wrobel et al. proposed an effective lip biometric recognition system using a Probabilistic Neural Network (PNN). In this experiment, they used three databases: two public databases, namely the Multi-PIE Face Dataset and the PUT database, and a local database consisting of 50 images collected from 5 subjects. Feature extraction was based on lip contours and new lip geometrical measurements. They obtained average classification accuracies of 86.95%, 87.14%, and 87.26% for the Multi-PIE Face Dataset, the PUT database, and the local database, respectively.
<Wrobel, Krzysztof, et al. “Using a Probabilistic Neural Network for lip-based biometric verification.” Engineering Applications of Artificial Intelligence 64 (2017): 112-127.>
Several electrophysiological signals have been explored for biometric recognition: the electrocardiogram (ECG), the photoplethysmogram (PPG), the electroencephalogram (EEG), blood volume pressure (BVP), and the electromyogram (EMG). The electrocardiogram is the process of measuring and recording the electrical activity of the heart; in the early 20th century, Willem Einthoven developed the first practical ECG recording device. A typical normal heartbeat in an ECG waveform consists of three main components: the P wave, the QRS complex, and the T wave. The main challenge in ECG-based biometric recognition lies in extracting discriminative features from the ECG signal.
< Sörnmo, Leif, and Pablo Laguna. Bioelectrical signal processing in cardiac and neurological applications. Vol. 8. Academic Press, 2005.>
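Fiducial ECG features are anchored on the P-QRS-T landmarks described above, and locating the R peak is the usual first step. The following is a minimal, illustrative sketch (thresholded local maxima on a synthetic spike train); production systems use detectors such as Pan-Tompkins:

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold_ratio=0.6, refractory_s=0.3):
    """Naive R-peak detector: local maxima above a fraction of the global
    maximum, separated by a refractory period (illustrative only)."""
    threshold = threshold_ratio * np.max(ecg)
    refractory = int(refractory_s * fs)
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] >= threshold and ecg[i] > ecg[i - 1] and ecg[i] >= ecg[i + 1]:
            if not peaks or i - peaks[-1] > refractory:
                peaks.append(i)
    return np.array(peaks)

# Synthetic "ECG": one narrow R spike per second, sampled at 250 Hz.
fs = 250
ecg = np.zeros(5 * fs)
ecg[fs // 2 :: fs] = 1.0
peaks = detect_r_peaks(ecg, fs)
rr_intervals = np.diff(peaks) / fs  # RR intervals in seconds
```

Once the R peaks are known, the remaining fiducial points (P, Q, S, T) are typically searched in windows around each peak.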
In 2010, Odinaka et al. demonstrated that the size of the testing database affects the error rate of a biometric system: as the testing database grows, the error rate increases.
<Odinaka, Ikenna, et al. “ECG biometrics: A robust short-time frequency analysis.” Information forensics and security (wifs), 2010 ieee international workshop on. IEEE, 2010.>
In 2016, Hejazi et al. introduced an ECG biometric with a new non-fiducial framework using kernel methods and investigated feature extraction with different dimensionality-reduction techniques. The proposed method provides a secure ECG verification system that controls the trade-off between the false match and false non-match rates when dealing with unknown subjects.
< Hejazi, Maryamsadat, et al. “ECG biometric authentication based on non-fiducial approach using kernel methods.” Digital Signal Processing 52 (2016): 72-86.>
In 2001, Biel et al. investigated the possibility of using the ECG as a new biometric for identifying people, using only standard 12-lead resting ECG recordings. In the experiment, 85 measurement sets were recorded from 20 subjects over 6 weeks. 360 features were extracted with a fiducial feature extraction method and converted to a readable format using a SIEMENS Megacart. The extracted features were local characteristics of the P wave, T wave, and QRS complex. The feature set was reduced from 360 to 12 by retaining only the limb leads and removing highly correlated features. Their best performance was an identification rate of 100% using 10 different features.
< Biel, Lena, et al. “ECG analysis: a new approach in human identification.” IEEE Transactions on Instrumentation and Measurement 50.3 (2001): 808-812.>
In 2001, Kyoso and Uchiyama developed an ECG identification system by introducing a fiducial method that discriminates between people using four features: the PQ interval, QRS duration, P wave duration, and QT interval. In this experiment, they collected ECG data from 9 subjects. Their best performance was an identification rate of 94% using the QRS and QT features.
<Kyoso, Masaki, and Akihiko Uchiyama. “Development of an ECG identification system.” Engineering in medicine and biology society, 2001. Proceedings of the 23rd annual international conference of the IEEE. Vol. 4. IEEE, 2001.>
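The interval features named above (PQ, QRS, QT, and P wave duration) are simple differences between fiducial sample indices. A sketch, assuming the fiducial points have already been located; the key names here are hypothetical, not Kyoso et al.'s notation:

```python
def fiducial_features(points, fs):
    """Interval features (in seconds) from fiducial sample indices.
    `points` maps hypothetical landmark names to sample indices."""
    return {
        "pq_interval": (points["q_onset"] - points["p_onset"]) / fs,
        "qrs_duration": (points["s_end"] - points["q_onset"]) / fs,
        "qt_duration": (points["t_end"] - points["q_onset"]) / fs,
        "p_duration": (points["p_end"] - points["p_onset"]) / fs,
    }

# Illustrative landmark indices for one heartbeat sampled at 500 Hz.
fs = 500
points = {"p_onset": 0, "p_end": 50, "q_onset": 80, "s_end": 130, "t_end": 280}
feats = fiducial_features(points, fs)
```

Each subject's feature vector is then compared against enrolled vectors in the matching stage.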
In 2007, Wübbeler et al. investigated and evaluated the performance of human verification and recognition based on a single heartbeat of the two-dimensional heart vector. In this experiment, 234 ECG signals were recorded from 74 subjects. They achieved an EER of 2.8%.
< Wübbeler, Gerd, et al. “Verification of humans using the electrocardiogram.” Pattern Recognition Letters 28.10 (2007): 1172-1175.>
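The equal error rate (EER) reported here and throughout this section is the operating point where the false accept and false reject rates coincide. A minimal sketch of how it can be estimated from genuine and impostor match scores; the scores below are toy values, not data from any of the cited papers:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep candidate thresholds and return the point
    where the false accept rate (FAR) and false reject rate (FRR) are
    closest. Higher scores mean a better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best_gap, best_eer = 1.0, None
    for thr in thresholds:
        far = np.mean(impostor >= thr)   # impostors wrongly accepted
        frr = np.mean(genuine < thr)     # genuine users wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])
impostor = np.array([0.1, 0.3, 0.2, 0.4, 0.75])
eer = equal_error_rate(genuine, impostor)
```

Real evaluations interpolate the FAR/FRR curves rather than sweeping raw scores, but the idea is the same.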
In 2002, Shen et al. presented an ECG recognition method based on the individual's electrocardiogram. In this experiment, 20 subjects were involved. They obtained correct identity verification rates of 95% and 80% for template matching and a decision-based neural network (DBNN), respectively. In addition, they achieved a recognition rate of 100% when the two methods were combined.
<Shen, Tsu-Wang, Willis J. Tompkins, and Yu Hen Hu. "One-lead ECG for identity verification." Proceedings of the Second Joint EMBS/BMES Conference. IEEE, 2002.>
In 2010, Odinaka et al. presented an ECG biometric using a short-time frequency method with a robust feature selection method to reduce dimensionality and improve performance. In this experiment, they recorded one-lead ECG signals from 269 subjects over 3 sessions spanning 7 months. They obtained an EER below 1%.
< Odinaka, Ikenna, et al. “ECG biometrics: A robust short-time frequency analysis.” Information forensics and security (wifs), 2010 ieee international workshop on. IEEE, 2010.>
In 2013, Zhao et al. proposed an ECG identification system based on ensemble empirical mode decomposition. In this experiment, three standard public ECG databases were used: the ST Change database, the PTB database, and the Long-Term ST database. Before extracting the subjects' heartbeats, they removed noise from the ECG signal through wavelet decomposition. They obtained an identification accuracy of 95% for 90 subjects chosen from the above databases using a K-nearest-neighbor classifier.
< Zhao, Zhidong, et al. “A human ECG identification system based on ensemble empirical mode decomposition.” Sensors 13.5 (2013): 6832-6864>
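The K-nearest-neighbor matching stage used by Zhao et al. can be sketched in its simplest (K = 1) form: assign the probe the label of the closest enrolled template. The feature vectors and labels below are toy values, not real ECG features:

```python
import numpy as np

def knn_identify(probe, templates, labels):
    """1-nearest-neighbor identification: return the label of the
    enrolled template closest to the probe (Euclidean distance)."""
    dists = np.linalg.norm(templates - probe, axis=1)
    return labels[int(np.argmin(dists))]

# Toy enrolled templates (2-D feature vectors) and their identities.
templates = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
labels = ["alice", "bob", "carol"]
result = knn_identify(np.array([0.9, 1.2]), templates, labels)
```

For K > 1 the label is decided by majority vote among the K closest templates.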
In 2017, Salloum and Kuo proposed an efficient ECG biometric using recurrent neural networks. In this experiment, two public datasets, ECG-ID and MIT-BIH Arrhythmia, were used to evaluate the method. They obtained an EER of 0% when the percentage of subjects used for training was increased to 80%.
< Salloum, Ronald, and C-C. Jay Kuo. “ECG-based biometrics using recurrent neural networks.” Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on. IEEE, 2017.>
In 2017, dos Santos Passos et al. improved the performance of ECG biometric recognition by applying symbolic representations of time series to the extraction of non-fiducial features. In this experiment, they studied 337 people from two publicly available datasets, MITDB and PTBDB. On average, they achieved accuracy rates of 98.8% on the PTBDB dataset using a SAX-Kmeans classifier and 98.3% on the MITDB dataset using an ESAX-Kmeans classifier.
< dos Santos Passos, Henrique, et al. “Symbolic representations of time series applied to biometric recognition based on ECG signals.” Neural Networks (IJCNN), 2017 International Joint Conference on. IEEE, 2017.>
In 2017, Yu et al. improved the performance of ECG biometric recognition using a PCA-RPROP approach. In this experiment, ECG signals of 88 subjects from the ECG-ID database were used. They obtained a recognition accuracy of 96.6%.
< Yu, Jinrun, et al. “ECG Identification Based on PCA-RPROP.” International Conference on Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. Springer, Cham, 2017.>
The electroencephalogram (EEG) is the process of measuring and recording the electrical activity of the brain.
In 2001, Paranjape et al. suggested using the electroencephalogram as a biometric for human identification. In this experiment, they collected 349 EEG recordings from 40 users. They obtained 100% correct classification when all data was used.
< Paranjape, R. B., et al. “The electroencephalogram as a biometric.” Electrical and Computer Engineering, 2001. Canadian Conference on. Vol. 2. IEEE, 2001.>
In 2005, Palaniappan presented an EEG biometric that identifies people by means of parametric classification of multiple mental thoughts. In this experiment, EEG signals were collected from four people while they were thinking of up to five mental thoughts. Autoregressive features were calculated and classified using a linear discriminant classifier. An average error rate of 2.60% was obtained for a single mental thought, and 0.1% for five mental thoughts.
<Palaniappan, Ramaswamy. “Multiple mental thought parametric classification: A new approach for individual identification.” International Journal of Signal Processing 2.1 (2005): 222-225.>
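Autoregressive (AR) coefficients, as used above, model each EEG sample as a linear combination of its predecessors; the coefficient vector then serves as the feature. A sketch using ordinary least squares (the authors' exact estimation method may differ, e.g. Burg's method is also common):

```python
import numpy as np

def ar_coefficients(signal, order):
    """Fit x[n] = sum_k a_k * x[n-k] + e[n] by ordinary least squares
    and return the coefficient vector [a_1, ..., a_order]."""
    # Column k holds the lag-(k+1) samples aligned with the targets y.
    X = np.column_stack(
        [signal[order - k - 1 : len(signal) - k - 1] for k in range(order)]
    )
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Exact AR(1) process with a_1 = 0.5: the fit recovers the coefficient.
signal = 0.5 ** np.arange(20)
coeffs = ar_coefficients(signal, 1)
```

On real EEG, the AR order is chosen per band or per channel and the coefficients from all channels are concatenated into one feature vector.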
In 2007, Palaniappan and Mandic presented an EEG biometric for identifying individuals with an advanced feature extraction method. In the experiment, EEG signals were recorded from 61 active channels of 40 subjects. In the extraction method, they considered the EEG signals with high-frequency energy and then reduced the feature size using the Davies-Bouldin index method, which selects the optimal channels. They obtained an overall recognition rate of 98.56 ± 1.87%.
<Palaniappan, Ramaswamy, and Danilo P. Mandic. “EEG based biometric framework for automatic identity verification.” The Journal of VLSI Signal Processing Systems for Signal, Image, and Video Technology 49.2 (2007): 243-250.>
In 2008, Sun presented an EEG biometric using multitask learning. In this experiment, the NIPS 2001 BCI database collected in Pennsylvania was used. Nine subjects were asked to imagine moving their right or left index finger in response to a predictable visual cue. The common spatial patterns method was used for feature extraction. Classification accuracies of 94.81% and 95.6% were achieved for right-index and left-index movements, respectively, using neural networks.
< Sun, Shiliang. “Multitask learning for EEG-based biometrics.” Pattern Recognition, 2008. ICPR 2008. 19th International Conference on. IEEE, 2008.>
In 2009, Riera et al. presented a multimodal authentication algorithm based on EEG and ECG signals. They collected EEG signals from 40 subjects using two electrodes and split the data into 4-second epochs. Single-channel and synchronicity features were extracted. They achieved a false acceptance rate (FAR) of 21.8% using Fisher's discriminant analysis. When they combined the EEG and ECG signals, the FAR dropped to 0.82%.
< Riera, Alessandro, et al. “Multimodal physiological biometrics authentication.” Biometrics: Theory, Methods, and Applications (2009): 461-482.>
In 2014, La Rocca et al. proposed an EEG biometric recognition system that fuses spectral-coherence-based connectivity between different brain regions as a potentially viable biometric feature. In this experiment, a dataset of 108 subjects was used, recorded at rest with eyes open and eyes closed. They obtained a recognition accuracy of 100% in both the eyes-open and eyes-closed states.
<La Rocca, Daria, et al. “Human brain distinctiveness based on EEG spectral coherence connectivity.” IEEE Transactions on Biomedical Engineering 61.9 (2014): 2406-2412.>
In 2014, Phung et al. presented an effective EEG biometric by introducing a feature extraction method that obtains brain-wave features from the various brain rhythms of the EEG signal. In this experiment, the Australian EEG dataset, gathered at the John Hunter Hospital and comprising 40 subjects, was used. The Shannon entropy (SE) of the power spectral density (PSD) of the alpha, beta, and gamma bands was extracted as the feature set. They obtained a classification rate of 97.1%. Conventional AR-based features achieved a higher classification rate than Shannon entropy, but they argued that the Shannon entropy feature is preferable because of its much faster recognition speed.
< Phung, Dinh Q., et al. “Using Shannon Entropy as EEG Signal Feature for Fast Person Identification.” ESANN. 2014.>
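The Shannon-entropy-of-PSD feature described above can be sketched as follows: compute a periodogram, restrict it to a frequency band, normalize it into a distribution, and take its entropy. The band edges and test signal below are illustrative:

```python
import numpy as np

def band_spectral_entropy(signal, fs, band):
    """Shannon entropy (bits) of the normalized power spectrum inside
    a frequency band [band[0], band[1]) Hz, via a simple periodogram."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    p = spectrum[(freqs >= band[0]) & (freqs < band[1])]
    p = p / np.sum(p)        # normalize to a probability distribution
    p = p[p > 0]             # avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# A pure 10 Hz tone: nearly all alpha-band (8-13 Hz) power sits in one
# bin, so the band entropy is close to zero.
fs = 128
t = np.arange(0, 4, 1 / fs)
entropy = band_spectral_entropy(np.sin(2 * np.pi * 10 * t), fs, (8, 13))
```

A flat (noise-like) spectrum in the band would instead yield entropy near its maximum, log2 of the number of bins.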
In 2015, DelPozo-Banos et al. investigated the time-frequency representation of EEG signals for EEG biometrics. In this experiment, they used six public databases: the Keirn, Yeom, Zhang, BCI2000, DEAP, and Ullsperger databases. They found that EEG information below 40 Hz can be used to discriminate uniquely between subjects.
< DelPozo-Banos, Marcos, et al. “EEG biometric identification: a thorough exploration of the time-frequency domain.” Journal of neural engineering 12.5 (2015): 056019.>
In 2016, Mu et al. enhanced the performance of an EEG biometric using feature extraction based on fuzzy entropy and feature selection based on Fisher distance. In this experiment, data was captured from 10 subjects using two electrodes, FP1 and FP2. They achieved an accuracy rate above 87.3% using a back-propagation (BP) neural network.
< Mu, Zhendong, Jianfeng Hu, and Jianliang Min. “EEG-based person authentication using a fuzzy entropy-related approach with two electrodes.” Entropy 18.12 (2016): 432.>
In 2017, Thomas and Vinod improved the performance of an EEG biometric system by means of power spectral density features. In this experiment, 19 EEG channels were recorded from 109 subjects at rest with eyes open and eyes closed. They obtained an equal error rate of 0.0196% using simple correlation-based matching.
< Thomas, Kavitha P., and A. P. Vinod. “EEG-Based Biometric Authentication Using Gamma Band Power During Rest State.” Circuits, Systems, and Signal Processing (2017): 1-13.>
In 2017, Mu et al. presented a high-performance EEG biometric achieved by extracting EEG signal features using four types of entropy. In this experiment, EEG signals were recorded from 16 subjects. They obtained an accuracy of 90.7% on average.
< Mu, Zhendong, et al. “Comparison of different entropies as features for person authentication based on EEG signals.” IET Biometrics (2017).>
Photoplethysmography (PPG) is a non-invasive electro-optical method that gives information about the volume of blood flowing through a test region of the body close to the skin.
In 2003, Gu et al. proposed a PPG biometric verification system based on photoplethysmography signals acquired from the fingertips. The extracted features were the peak number, upward slope, time interval, and downward slope. A statistical analysis was applied to the extracted features to verify their uniqueness. They obtained a success rate of 94% using the Euclidean distance.
< Gu, Y. Y., Y. Zhang, and Y. T. Zhang. “A novel biometric approach in human verification by photoplethysmographic signals.” Information Technology Applications in Biomedicine, 2003. 4th International IEEE EMBS Special Topic Conference on. IEEE, 2003.>
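The Euclidean-distance matching used with the four PPG features above amounts to a simple threshold test on the distance between probe and enrolled feature vectors. The feature values and threshold below are hypothetical, for illustration only:

```python
import numpy as np

def verify(probe, template, threshold):
    """Accept the claimed identity if the Euclidean distance between
    the probe and enrolled feature vectors falls below the threshold."""
    distance = float(np.linalg.norm(np.asarray(probe) - np.asarray(template)))
    return distance, distance < threshold

# Hypothetical [peak_count, upward_slope, time_interval, downward_slope]
# feature vectors; in practice each would be normalized first.
enrolled = [1.0, 0.80, 0.42, -0.70]
probe = [1.0, 0.78, 0.43, -0.69]
dist, accepted = verify(probe, enrolled, threshold=0.1)
```

The threshold trades off the false accept and false reject rates; it is typically set at the EER operating point on a development set.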
In 2016, Darshan et al. utilized yttrium aluminate nanoparticles to enhance fingerprint quality for the first time. The experimental results showed that the YAlO3:Sm3+ phosphor is a versatile fluorescent label for the facile detection of fingerprint marks on virtually any material, enabling practical applications in forensic science as well as display devices.
In 2002, Lee and Grimson identified individuals and classified gender based on a simple representation of human gait appearance using moments computed from silhouettes of the walking person.