Passwords and identification are a massive pain to carry around and memorize. According to a study from Cyber Streetwise, the average British citizen needs to recall 19 passwords to access all of their online applications. Some websites even demand a 20-character password stuffed with annoying symbols like $, *, or #, practically guaranteeing that you’ll write it down and then forget where you left that piece of paper.
It’s no wonder there’s so much interest in biometrics. Biometric protection uses you as the password: instead of remembering a string of characters, you authenticate with biological features such as your fingerprint, iris, voice, or face, which makes it convenient and hard to steal.
Biometric identification also lets you identify people, including criminals, based on their physical characteristics. The potential of biometrics extends as far as the iris can see, but first, let’s look at how each biological feature works.
How does Fingerprint Recognition Work?
This is the most efficient and common form of biometrics today, and it has developed quite a bit. However, to learn how fingerprint recognition works, you first need to understand how touchscreens work.
There are two main types of touchscreens: resistive and capacitive displays. Resistive displays are the simplest type, and they are mostly found in printers, GPS units, or even your old Nintendo DS.
The way this technology operates is simple: the user presses the screen hard enough for it to bend. There is a bit more to it than that, because a resistive display consists of two different layers that can conduct electricity; the inner layer is resistive (hence the name) while the outer layer is conductive.
Between these two layers are tiny dots called spacers that keep the layers apart until the user touches the screen. On contact, the layers get squished against the spacers and the electric current changes at that spot. The device can detect an electrical current change in any particular spot, meaning the software will perform the function that corresponds to that place.
Resistive screens are reliable and durable, but on the downside, they are pretty hard to read due to their multiple layers, especially when light shines on them. If you have ever tried to use an ATM on a sunny day, you will know it is impossible to read anything on the screen! You also cannot zoom in on a resistive screen, and it only supports one touch at a time. Now that we’ve seen resistive screens, let’s move on to capacitive.
When you scroll through Snapchat or Instagram on your phone, you are using a capacitive screen. Instead of relying on pressure, the screen only needs contact with something that holds an electric charge, such as your finger.
These screens are made from either indium tin oxide or copper oxide, both of which are great at conducting electricity and storing electrical charges in microscopic wires.
So when your finger touches the screen, a tiny amount of electric charge transfers to your finger, completing a circuit and producing a voltage drop at that point. The software then analyzes where that voltage drop occurred and follows the user’s command.
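The locating step can be sketched as a toy grid scan. This is purely illustrative (real touch controllers are hardware, and the grid values below are made up): the idea is simply that the controller reports the cell where the voltage drop is largest.

```python
# Toy sketch of capacitive touch location (illustrative, not real firmware).
# Each cell holds the measured voltage drop at one point of a sensor grid;
# the touch is reported wherever the drop is largest.

def locate_touch(voltage_drops):
    """Return (row, col) of the largest voltage drop in a 2D grid."""
    best = None
    best_drop = 0.0
    for r, row in enumerate(voltage_drops):
        for c, drop in enumerate(row):
            if drop > best_drop:
                best_drop = drop
                best = (r, c)
    return best

grid = [
    [0.00, 0.01, 0.00],
    [0.02, 0.35, 0.04],  # finger resting near the middle
    [0.00, 0.03, 0.01],
]
print(locate_touch(grid))  # → (1, 1)
```

A real controller scans rows and columns of electrodes many times per second, but the decision it makes each scan is essentially this one.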
This also explains why these screens don’t respond when you wear gloves, since most gloves don’t conduct electricity, and why they misbehave with wet fingers: water conducts electricity, so the system cannot pinpoint where your finger is.
So now that we know both types of screens, which type of screen does fingerprint recognition use? It would be capacitive! Fingerprint readers adopt this concept for personal identification by cramming lots of tiny conductive plates into the space below the scanner surface.
These plates are narrower than the ridges of your fingerprint, so when you place your fingertip against the reader, the sensors can tell which portions of your finger are touching them and which are not. This forms a virtual image of the ridges that make up your fingerprint. Whenever a finger is placed against the scanner, the software compares the new print to the stored one, and if they match, the device unlocks.
There are also different types of fingerprint scanners such as optical scanners that use visible light to capture your fingerprint or ultrasonic fingerprint readers that use echolocation like a bat to detect the ridges of your fingerprint.
Ultrasonic scanners even work through other materials like glass and aluminum, meaning they can be placed under a phone’s screen, and they can read your fingerprint even when it is dirty. This all sounds great, and they seem like a highly secure technology, but fingerprint readers certainly aren’t foolproof, no matter which variety you are using.
Think about how the scanner on your phone works even when you don’t place your finger in exactly the same position every time. This is because the scanner only requires a partial match to open, so you can unlock your phone from many different angles.
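The partial-match idea can be sketched with two binary ridge maps and a similarity threshold. This is a toy model (real matchers compare minutiae features, not raw pixel grids, and the 0.8 threshold is made up), but it shows why a slightly dirty or tilted finger still unlocks the phone:

```python
# Toy sketch of fingerprint matching as partial overlap of binary ridge maps.
# Real matchers compare minutiae features; this only illustrates the idea of
# a threshold-based partial match.

def ridge_similarity(stored, scanned):
    """Fraction of grid cells that agree between two binary ridge maps."""
    flat_a = [cell for row in stored for cell in row]
    flat_b = [cell for row in scanned for cell in row]
    matches = sum(a == b for a, b in zip(flat_a, flat_b))
    return matches / len(flat_a)

stored  = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
scanned = [[1, 0, 1], [0, 1, 0], [1, 1, 1]]  # one cell differs (dirt, angle)

THRESHOLD = 0.8  # a partial match is enough to unlock (made-up value)
score = ridge_similarity(stored, scanned)
print(score >= THRESHOLD)  # → True (8 of 9 cells agree)
```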
Even though master fingerprints are typically secured in some way (Apple, for example, encrypts Touch ID fingerprint data and stores it locally), it is still possible to bypass fingerprint readers using molds of other people’s fingerprints. This is why most cellphones also require a PIN or password alongside your fingerprint.
However, even with these flaws, fingerprint readers are still amazing at their job and many companies such as Apple have incorporated fingerprint readers in their products as far back as 2013.
Other phone companies like Samsung are doing the same today and we even have technology that allows us to withdraw and deposit money from the bank with a single touch of a finger.
How does Iris Recognition Work?
Iris recognition is a biometric technology based on the patterns of the human iris. In the 1980s, two ophthalmologists, Aran Safir and Leonard Flom, proposed that no two irises are the same, even in twins, which makes the iris a strong basis for authentication.
The iris also never changes over a lifetime, unlike your face or voice. They determined this by examining the features of the iris, which include crypts, coronas, colors, pits, contraction furrows, striations, freckles, and rifts.
The first stage of iris recognition works by scanning the eye from 4 to 13 centimeters away with an infrared camera. The pattern of the iris is then isolated from the rest of the image. Using an algorithm from John Daugman, the patterns of the iris are encoded into a 512-bit code called the iris code.
From there, the iris code is immediately encrypted into an iris signature to prevent theft. It is then compared to other codes stored in the database for matching patterns. An individual’s iris signature cannot be restored or replicated, making it unique and safe to use.
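The comparison step can be sketched in a few lines. Daugman-style systems typically compare iris codes by Hamming distance, the fraction of bits that differ; that detail and the 0.3 threshold are background assumptions, not from the article, and the codes below are randomly generated stand-ins:

```python
# Sketch of iris-code matching by Hamming distance: two scans of the same eye
# differ in only a few bits, while codes from different eyes differ in ~50%.

import random

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two equal-length bit lists."""
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

random.seed(42)
enrolled = [random.randint(0, 1) for _ in range(512)]   # stored 512-bit code
same_eye = [b if random.random() > 0.05 else 1 - b for b in enrolled]  # noisy rescan
stranger = [random.randint(0, 1) for _ in range(512)]   # a different person

THRESHOLD = 0.3  # accept a match below this distance (illustrative value)
print(hamming_distance(enrolled, same_eye) < THRESHOLD)  # → True
print(hamming_distance(enrolled, stranger) < THRESHOLD)  # → False
```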
Iris recognition has usually been used in fields that require high security, such as labs, space stations, and government buildings, but it is now being popularized for broader applications, such as identity checks at immigration offices or patient identification in hospitals.
The industry with the most potential for iris recognition is fintech. Anything that involves transferring money, such as a bank deposit or a stock trade, could be done with nothing more than a glance. The Japanese electronics company Fujitsu has also offered the world’s first iris-unlocking phone, which is slowly being integrated into the market.
How does Voice Recognition Work?
Early forms of voice recognition had a limited vocabulary: in the 1950s, systems could recognize only about 10 words. Thirty years later, that number had grown to just 20,000 words, which may sound like a lot until you remember the English language has over 1 million words.
Software back then also could not predict the context of your sentence, so to these programs, “I have a pet hat” made just as much sense as “I have a pet cat”.
Today, however, we have applications like Siri and Cortana that can understand full sentences and perform whatever action you desire. So how do you apply this to biometric security?
Of the four biometric traits, voice is the only behavioral one, meaning it is not something you physically have but something you express. A voice recognition system is a program that can understand dictation or spoken instructions.
When this system is used within a computer, the analog signal produced by the human voice must be converted into a digital signal so the computer can understand what you’re trying to say. It does this using an electronic integrated circuit called an ADC (analog-to-digital converter), which converts the voltages coming from the microphone into a binary form consisting of 1s and 0s.
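The conversion itself is just sampling and quantization. Here is a minimal sketch of the math an 8-bit ADC performs in hardware (the 3.3 V reference and 8 kHz sampling rate are example values, not from the article):

```python
# Sketch of what an ADC does: sample a continuous voltage and quantize each
# sample into a fixed number of bits (8 here). Real converters do this in
# hardware; the arithmetic is the same.

import math

def adc_sample(voltage, v_ref=3.3, bits=8):
    """Map a voltage in [0, v_ref] to a string of `bits` binary digits."""
    levels = 2 ** bits
    code = int(voltage / v_ref * (levels - 1))
    return format(code, f"0{bits}b")  # e.g. "10000000"

# A few samples of a 1 kHz tone centered at 1.65 V (mid-scale of 3.3 V).
for n in range(4):
    t = n / 8000                      # 8 kHz sampling rate
    v = 1.65 + 1.0 * math.sin(2 * math.pi * 1000 * t)
    print(adc_sample(v))
```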
However, a voice recognition system does not always produce an accurate output due to background noise or heavy accents. Similar-sounding words like “read” and “red” can also confuse it, so the computer requires faster processors and more RAM to work out the context of these words.
A voice recognition security system can be designed with three main elements: a microphone circuit, a PIC (Peripheral Interface Controller) microcontroller, which can be programmed in assembly or C, and an LCD (liquid crystal display).
The design of this system is surprisingly simple compared to the other biometric systems. The microphone circuit is connected to the ADC of the PIC microcontroller, which translates the analog signal into a digital signal. From there, a programming system called The Owl Embedded Python System, containing the user’s instructions, is loaded onto the microcontroller and carries the digital signal through the controller to the LCD display.
The microcontroller then determines whether the spoken word matches the inbuilt password and shows the result on the LCD display. So now that we know how it works, let’s break down the parts a little.
A microphone is a sensor and a transducer used to convert the sound waves from your mouth into an electrical signal. The sensor is a physical device that senses a physical quantity (in this case, your voice) and converts it into an electrical signal (in this case, an analog signal) that can be read by an instrument.
A transducer’s job is to convert one form of energy into another; here, the microphone converts sound energy into an analog electrical signal, which is then converted into a digital signal for the computer to read. Sound familiar? That last step is the ADC at work again!
The microcontroller, or MCU, is the computer of the system. Its low power consumption, self-sufficiency, and high integration make it an affordable and efficient way to control electronic devices.
Microcontrollers usually integrate extra elements such as ROM for storing code, read-write memory for storing data, input/output interfaces, and peripheral devices. These controllers are used in automatically controlled devices such as remote controls, power tools, and even kids’ toys.
Lastly, the LCD display is a flat, thin display made of monochrome pixels arranged in front of a reflector. The LCD used in this project, however, is an alphanumeric variant, which means it displays alphabetical, numerical, and symbolic characters from the ASCII character table.
This lets the display show the spoken word and whether it matches the inbuilt password. LCDs are also found in small battery-powered electronics due to their low power consumption.
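Putting the three parts together, the password check at the heart of the system can be sketched like this. Everything here is illustrative: the speech decoding is faked with a lookup table (the genuinely hard part in a real system), and the passphrase and sample values are made up.

```python
# Toy sketch of the microphone → microcontroller → LCD flow. Speech decoding
# is faked with a lookup; a real PIC would process raw ADC samples instead.

INBUILT_PASSWORD = "open sesame"   # hypothetical stored passphrase

def decode_speech(adc_samples):
    """Stand-in for real speech recognition over digitized samples."""
    fake_decoder = {(10, 52, 93): "open sesame", (7, 7, 7): "hello world"}
    return fake_decoder.get(tuple(adc_samples), "")

def check_password(adc_samples):
    word = decode_speech(adc_samples)
    # In the real design, this result would be printed on the LCD.
    return "ACCESS GRANTED" if word == INBUILT_PASSWORD else "ACCESS DENIED"

print(check_password([10, 52, 93]))  # → ACCESS GRANTED
print(check_password([7, 7, 7]))     # → ACCESS DENIED
```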
How does Facial Recognition Work?
This is the last major feature of the human body we can use for biometrics, and it has developed quite a bit. Facial recognition is something humans have evolved to do and we have a whole area of our brain dedicated to it called the fusiform face area.
This area excels at recognizing recurring patterns, and faces are just another pattern, which is why humans are so good at remembering them. Most of our facial recognition technology is built on the same idea. The computer divides the face into visible landmarks called nodal points, which can include the depth of an eye socket or the distance between the eyes.
These measurements then make up a unique code called a faceprint. However, to get a correct match, the system needs nearly identical photos, and you rarely get the same view of your face in two photos.
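The nodal-point idea can be sketched as distances between landmark coordinates. Real systems use many more landmarks plus learned features; the points, names, and 2.0 tolerance below are made-up examples:

```python
# Sketch of a "faceprint" as pairwise distances between nodal points.
# Landmark coordinates are invented for illustration.

import math

def faceprint(landmarks):
    """All pairwise distances between (x, y) landmark points."""
    points = list(landmarks.values())
    return [math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]]

def faces_match(print_a, print_b, tolerance=2.0):
    """Match if every corresponding distance agrees within a tolerance."""
    return all(abs(a - b) <= tolerance for a, b in zip(print_a, print_b))

# Two photos of the "same" face, with slight landmark jitter between shots.
photo_1 = {"left_eye": (30, 40), "right_eye": (70, 40), "nose_tip": (50, 65)}
photo_2 = {"left_eye": (31, 40), "right_eye": (70, 41), "nose_tip": (50, 66)}

print(faces_match(faceprint(photo_1), faceprint(photo_2)))  # → True
```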
Our faces constantly change, unlike fingerprints, which are static. Because of this, the four main issues that come up with facial recognition are aging, pose, illumination, and emotion, known together as the A-PIE problem.
However, a 3D recognition system called DeepFace can take a 2D image and create a 3D model from it. This allows pictures from different angles to be compared, which solves the pose issue. It also helps with the emotion issue, since the 3D model DeepFace creates carries the same expression as the 2D image.
Illumination can be solved with night vision or an infrared camera, although with infrared you may not be able to see all the details of the face. Aging is no longer a big problem either, since DeepFace focuses on areas of the face with rigid tissue and bone, such as the chin, and these features don’t change much as we age.
A type of machine learning called deep learning mimics the human brain and uses neural networks to classify information. This plays a huge role in facial recognition.
This technology works largely without human intervention and learns by trial and error: each time the DeepFace algorithm incorrectly matches two different faces, it records the steps it took and where the error occurred, adjusting the connections between the nodes of its neural network until it gets a correct result. Think of it as a self-reliant machine, learning from its mistakes to improve its performance.
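That error-driven loop can be sketched with the smallest possible learner, a perceptron, which only updates its weights after a mistake. This is a drastically simplified stand-in for a deep network (a real face model has millions of weights; the training pairs here are invented):

```python
# Tiny sketch of error-driven learning (a perceptron), illustrating the
# "learn from each mistake" loop. Features and labels are made up.

def predict(weights, features):
    total = sum(w * f for w, f in zip(weights, features))
    return 1 if total > 0 else 0  # 1 = "same face", 0 = "different face"

training = [
    ([1.0, 0.2, 0.1], 1),
    ([0.1, 1.0, 0.9], 0),
    ([0.9, 0.1, 0.3], 1),
    ([0.2, 0.8, 1.0], 0),
]

weights = [0.0, 0.0, 0.0]
for _ in range(10):                   # sweep over the data a few times
    for features, label in training:
        error = label - predict(weights, features)
        if error:                     # only update after a mistake
            weights = [w + 0.5 * error * f for w, f in zip(weights, features)]

print([predict(weights, f) for f, _ in training])  # → [1, 0, 1, 0]
```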
Facebook’s neural network had 20 million connections as of 2015, a number that only grows with every photo that gets uploaded. The larger the dataset, the better the computer becomes at tasks like recognizing faces; Facebook’s AI lab can identify faces with an accuracy of 97.35%.
The reason their AI is so accurate is the data used to train it: a library of 4.4 million labeled faces taken from the profiles of 4,030 users. Beyond Facebook, facial recognition has many real-world applications, such as identifying criminals in shops, at events, and in homes.
To fight fraud, many banks such as CIBC offer fingerprint recognition, while Mastercard, Alibaba, and iProov are looking into technology that can confirm a credit card purchase with a selfie.
Another way this tech could be useful is in helping people with prosopagnosia, a condition that makes it hard to identify faces, including one’s own. A device that scans faces for them could make their lives a lot easier.
A company called Mondelez International has even developed smart shelves for grocery stores: camera-equipped shelves that can estimate your age and gender. From there, the shelves can offer suitable products and deals based on how the person looks; for example, if someone looks like they have a fever, the shelf can recommend Tylenol. There are so many possibilities with facial recognition, and that was only scratching the surface.
The Potential of Biometrics
Having a digital key on you that can never be lost or stolen, and that can access anything from your phone to your bank account, is incredible. Think about how far humanity has advanced, from a lock and key to full-fledged security algorithms that once seemed possible only in James Bond movies. Biometrics has even worked its way into our daily lives through simple tasks, such as identifying ourselves at airport security.
Even with all this tech, biometrics is still a developing, emerging technology, which is great because it leaves the door open for many companies to develop innovative ideas. Who knows what part of your body you’ll use to pay for your groceries next?
- Biometrics are biological measurements that can be used to identify a person. They are used for security purposes and can be physical or behavioral.
- The 4 main biological features that are observed by biometric tech are fingerprint, iris, voice, and facial recognition.
- Fingerprint recognition uses capacitive screens made from indium tin oxide or copper oxide. They use tiny conductive plates to scan the ridges of your finger and form a digital fingerprint.
- Fingerprint recognition only requires a partial match to open, meaning it is easily accessible but less resilient against attackers.
- No two irises are the same and the iris never changes over time, making it a great biological feature to measure.
- The pattern of the iris is scanned to make a 512-bit code called the iris code. From there it is encrypted into an iris signature where it will be compared to other codes stored in the database for matching signatures.
- Voice recognition is the only behavioral trait of the four biometric features, meaning you express this trait rather than physically having it like your fingerprint or iris.
- A microphone circuit, a PIC microcontroller containing an ADC, and an LCD display are all that are required to build a voice recognition system.
- The facial recognition system divides your face into visible landmarks called nodal points and can create a 3D model of your face through DeepFace.
- Deep learning plays a huge role in facial recognition by mimicking the fusiform face area of the brain and using neural networks to categorize faces.
- Biometrics is still an emerging technology, and its applications are endless.
If you want to see more articles about biology, technology, and other things, you can subscribe to my Medium to check out the rest of my articles. You can also check out my Linkedin and subscribe to my newsletter. Thanks for reading!