For some, replacing passwords with biometrics is the answer. No more logging in to platforms each time you need to access something. No more two- or three-step verification, copying and pasting codes from email or text to validate your login. And especially no more having to create and remember multiple 11-character passwords with a capital letter, number and special character included. Instead, you simply look into the camera on your mobile, blink and you're validated. Much simpler for the user.
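For readers curious how that camera-and-blink flow is often wired up, one common pattern is the WebAuthn browser API that underpins passkeys: the face or fingerprint match happens entirely on the device, and the service only ever receives a cryptographic assertion. The sketch below is illustrative rather than a description of any particular company's system, and the "/auth/..." endpoints are hypothetical placeholders.

```typescript
// A sketch of device-local biometric sign-in via the WebAuthn browser API
// (the mechanism behind passkeys). The face or fingerprint check happens on
// the device itself; the server only receives a signed assertion, never the
// biometric data. The "/auth/..." endpoints are hypothetical placeholders.

function toBase64(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...Array.from(new Uint8Array(buf))));
}

async function biometricLogin(): Promise<void> {
  // 1. Fetch a one-time random challenge from the server to prevent replay.
  const challenge = new Uint8Array(
    await (await fetch("/auth/challenge")).arrayBuffer()
  );

  // 2. The browser and OS prompt for a local face or fingerprint scan.
  const credential = (await navigator.credentials.get({
    publicKey: {
      challenge,
      userVerification: "required", // insist on a biometric or device PIN
      timeout: 60_000,
    },
  })) as PublicKeyCredential;

  const assertion = credential.response as AuthenticatorAssertionResponse;

  // 3. Return only the signed assertion; the server verifies the signature
  //    against the public key it stored at enrollment.
  await fetch("/auth/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      credentialId: credential.id,
      clientDataJSON: toBase64(assertion.clientDataJSON),
      authenticatorData: toBase64(assertion.authenticatorData),
      signature: toBase64(assertion.signature),
    }),
  });
}
```

In this model the biometric template never leaves the device, which is one way vendors try to limit the storage risks discussed next. Not every company takes this approach, though, and that is where the trouble starts.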
Because of this, many companies are turning to biometrics to secure access to accounts, especially in the finance sector. The challenge is managing all that biometric data. We already know how valuable personal customer data is: names, addresses, Social Security numbers. How much more valuable is a photo or video identification with biometric markers that are unique to each user?
Biometrics might make access to apps significantly easier, but its use raises a number of ethical questions. When it comes to regulation, only a handful of countries have robust laws focused on the protection of personal information that includes biometrics. The USA, unfortunately, is not one of them. Some individual states and cities have drafted their own regulations, but these can be difficult to police when technology use is rarely confined to city, state or national boundaries.
The concerns about biometrics go beyond regulation to consent, privacy, and how the data is used and stored. A major concern is the potential for unauthorised use and what measures are in place to prevent it. And when a breach does occur, what processes are in place to manage it?
With the potential of AI to manipulate data, is using biometrics playing right into the hands of threat actors? Is what we think is a more secure system actually exposing data that, when misused, could be highly damaging to individuals and even to certain groups of people?
We are already dealing with the challenges of increased polarization and bias in society, particularly biased profiling by race or gender. Feed biometric data into security or corporate systems that amplify those biases and there's real potential for harm. We're not just talking about access to banking apps or airport security. It could be something as simple as declining a request for credit or a home loan, or worse, medical treatment. That translates into people being denied support and services because of their biometric data and the bias built into the systems that process it.
With the risk of personal data exposure already high in all industries, will companies using biometric data strengthen their security efforts accordingly? Will they operate transparently and be accountable for how biometric data is collected, stored and used?
If the purpose of using biometric data is to create more secure and user-friendly systems, then companies have a responsibility to treat that data as critically valuable. This means giving careful consideration to who has access to the data, even internally. Who can change it, delete it, and use it? If biometrics are going to deliver the next level of secure access and convenience, will companies manage that data responsibly?