Tuesday — July 20, 2021
When it comes to the ethical use of facial recognition, not all facial recognition companies are the same. How facial recognition (FR) data is gathered and used varies widely depending on how the algorithms are designed and who controls the information obtained. When evaluating FR systems, two key questions arise:
• Does the system show any bias when recognizing people of different skin tones, genders, or ages?
• Who owns any data associated with the system, and what can they do with it?
We’ve built artificial intelligence systems to help us make decisions, perform tasks, and understand the world. We ask these AI systems to make judgments on our behalf all the time — is this person at the door a threat? Is it safe for my self-driving car to proceed? Is this person creditworthy? Bias is human, but it’s not what we want of our computers.
The first step toward removing bias in our machines is to stop classifying people in unscientific ways. If AI and machine learning are to fulfill the promise of objective data classification, we need to train our models on scientifically sound, diverse data. Training a model to determine someone's ethnicity or race from their face is about as valuable as building a model to determine their astrological sign, and similarly unscientific. Ethnicity is a complex mix of heritage and culture, and it is impossible to categorize by appearance.
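To make "plenty of diversity" concrete, a team can audit a training set's demographic balance before any training begins. Below is a minimal sketch in Python; the metadata file and column names are hypothetical, and the demographic labels are used only to audit balance, never as prediction targets.

```python
# A minimal sketch of a training-set balance audit using pandas.
# The metadata file and its column names (age_band, gender, skin_tone,
# region) are hypothetical; adapt them to your dataset's actual schema.
import pandas as pd

metadata = pd.read_csv("face_dataset_metadata.csv")  # hypothetical file

# Print the share of images in each demographic group so that
# underrepresented groups stand out before training begins.
for column in ["age_band", "gender", "skin_tone", "region"]:
    shares = metadata[column].value_counts(normalize=True)
    print(f"\n{column} distribution:")
    print(shares.to_string(float_format=lambda x: f"{x:.1%}"))
    if shares.min() < 0.05:  # 5% is an arbitrary illustrative floor
        print(f"WARNING: some {column} groups fall below 5% of the data")
```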
A recent NIST study measuring demographic differences in accuracy found alarming levels of bias in many facial recognition algorithms, including false-positive rates 10 to 100 times higher for Asian and African American faces than for white faces. This is simply unacceptable. To train a model that recognizes individuals with uniform accuracy across a spectrum of faces, it is imperative to source a diverse training set from around the world, with strong representation across age, gender, skin tone, and geographic origin.
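Buyers can run this kind of disaggregated audit themselves. The sketch below shows the core calculation, assuming you already have similarity scores for impostor pairs (images of two different people) tagged with a demographic group; the group names, scores, and threshold are all illustrative.

```python
# A minimal sketch of a demographic false-positive audit, in the spirit
# of the NIST evaluation described above. All names, scores, and the
# threshold are illustrative placeholders.
from collections import defaultdict

# (group, similarity_score) for impostor pairs: any score above the
# decision threshold is a false positive (a wrongful "match").
impostor_scores = [
    ("group_a", 0.31), ("group_a", 0.72), ("group_a", 0.28),
    ("group_b", 0.55), ("group_b", 0.81), ("group_b", 0.77),
]
THRESHOLD = 0.60  # a single global decision threshold for every group

by_group = defaultdict(list)
for group, score in impostor_scores:
    by_group[group].append(score)

rates = {
    group: sum(s > THRESHOLD for s in scores) / len(scores)
    for group, scores in by_group.items()
}
for group, fpr in rates.items():
    print(f"{group}: false-positive rate = {fpr:.1%}")

# The ratio of the worst group to the best group is a simple disparity
# measure; NIST reported ratios of 10x to 100x for some algorithms.
print(f"disparity ratio: {max(rates.values()) / min(rates.values()):.1f}x")
```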
Follow the Data
Privacy and security are key concerns for skeptics of facial recognition technology, as well as for its users and potential users. When evaluating FR solutions, it's important to ask 'who owns the data?' In some cases, the FR company owns the data and may even sell it to third parties. There are, however, responsible providers of FR technology that ensure the end user owns their own data and that make privacy simple to protect by building such tools into the design, including functions for opting in and out. In many common deployments, providers don't collect any personally identifiable information at all.
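What opt-in and opt-out look like in practice can be sketched very simply: enrollment is refused without consent, and revoking consent deletes the stored biometric template. The class and storage below are hypothetical; a real system would persist consent records and templates in audited, access-controlled stores.

```python
# A minimal sketch of a consent-first enrollment gate, illustrating the
# opt-in/opt-out design described above. ConsentRegistry is hypothetical.
from dataclasses import dataclass, field


@dataclass
class ConsentRegistry:
    opted_in: set[str] = field(default_factory=set)
    templates: dict[str, bytes] = field(default_factory=dict)

    def opt_in(self, subject_id: str) -> None:
        self.opted_in.add(subject_id)

    def opt_out(self, subject_id: str) -> None:
        # Opting out both revokes consent and deletes the stored template,
        # so no biometric data for the subject remains in the system.
        self.opted_in.discard(subject_id)
        self.templates.pop(subject_id, None)

    def enroll(self, subject_id: str, template: bytes) -> bool:
        # Enrollment is refused unless the subject has explicitly opted in.
        if subject_id not in self.opted_in:
            return False
        self.templates[subject_id] = template
        return True


registry = ConsentRegistry()
print(registry.enroll("alice", b"\x01\x02"))  # False: no consent yet
registry.opt_in("alice")
print(registry.enroll("alice", b"\x01\x02"))  # True: consented
registry.opt_out("alice")                     # revokes consent, deletes template
```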
In a world where data brokers already buy and sell detailed profiles describing who we are based on public records, shopping habits, and social media interactions, companies can easily figure out your gender, age, income level, relationship status, exact location, and much more.
Ensuring the ethical use of facial recognition is a goal we all should share. Asking the right questions, educating ourselves about bias, and following the data can hold the industry and its vendors accountable. There's no reason to sacrifice privacy to make our world a safer place.