Facial recognition needs global standards to improve data privacy
In conversation with SNS Mideast, Sheraun Britton-Parris, Chief Marketing Officer at VisionLabs, discusses the use of AI in facial recognition, the racial biases within the technology and how to overcome them, how to create ethical AI, and much more.
How does AI work in facial recognition and what are the factors that affect the accuracy of facial recognition?
Artificial intelligence (AI) in facial recognition works by learning to recognize human facial features and then correctly identify a vast number of individuals. However, AI and machine learning (ML) – regardless of how powerful they are – are nothing without data. If you think of AI as an engine, the data is the fuel. The more fuel you put in, the further the tech can go.
However, one of the issues the industry faces is the lack of diverse datasets driving the tech – meaning technology providers mostly expose their AI to the same type of data, which limits its capabilities. The more diverse the data fed into the AI, the more inclusive and accurate the facial recognition technology becomes.
Also, relying solely on first-party data can adversely affect the inclusivity and effectiveness of facial recognition technology. At VisionLabs, we leverage third-party databases and purposefully seek diversity in the databases our AI is exposed to, ensuring the highest level of accuracy across all races and genders.
Other factors that impact general accuracy include camera quality, lighting and background, as well as the angle of the camera.
How can you know whether AI is working the way you need it to work?
There are two key ways to understand if the AI is working as expected: Testing and client feedback from live deployment.
First, testing is a crucial phase, as it allows us to understand whether the AI is working as intended. All software providers must use third-party testing, such as NIST's, to assess whether their AI and technology are working as intended. VisionLabs' software has been ranked first for 1-to-1, 1-to-many and racial bias by NIST – though of course, we are always striving to improve and deliver the greatest digital identity experiences.
Second, clients collect data to understand whether facial recognition is delivering value to their business. Clients will flag issues and successes of the technology and how it serves their business environment. Facial recognition providers leverage this feedback to understand their software's strengths and weaknesses and to improve it.
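The 1-to-1 and 1-to-many modes that benchmarks such as NIST's evaluate can be sketched in a few lines: a face image is reduced to an embedding vector, and matching is a similarity comparison against a single enrolled template (verification) or a whole gallery (identification). The vectors, names and threshold below are illustrative assumptions, not VisionLabs' actual pipeline.

```python
import math

# Illustrative only: real systems derive embeddings from a deep network;
# here we use small hand-made vectors and a made-up acceptance threshold.
THRESHOLD = 0.8  # hypothetical similarity cutoff

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def verify(probe, template, threshold=THRESHOLD):
    """1-to-1: does the probe match this one enrolled template?"""
    return cosine_similarity(probe, template) >= threshold

def identify(probe, gallery, threshold=THRESHOLD):
    """1-to-many: best-scoring enrolled identity above threshold, or None."""
    best_id, best_score = None, threshold
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

gallery = {
    "alice": [0.9, 0.1, 0.3],
    "bob": [0.1, 0.8, 0.5],
}
probe = [0.88, 0.12, 0.31]  # a new capture of "alice"
print(verify(probe, gallery["alice"]))  # True
print(identify(probe, gallery))         # alice
```

The same threshold governs both modes, which is why a poorly chosen cutoff simultaneously produces false positives in identification and false accepts in verification.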
In your view, why is racial bias prevalent in facial recognition technology?
As discussed previously, one key issue is data. The less diverse the datasets used to train facial recognition technology, the more entrenched the bias becomes within the AI. There aren't any frameworks or guidelines to help tech providers make their AI more inclusive, meaning that today it simply isn't.
The second key issue, one that spans many industries, is the lack of representation within the decision-making process. Those with the power to make decisions are homogenous in race and gender – white males – leading to AI engines that hold their unconscious biases. Algorithms and AI are a direct reflection of the people who create them, so having a diverse team of decision makers is essential.
What are the major impacts that you see because of facial recognition bias?
The impacts of biased facial recognition cannot be overstated. Due to the current issues with datasets and representation within the industry, one demographic is being valued more than all others.
People of color are subject to an abnormally high rate of false positives – meaning one individual is mistaken for another. When you consider that facial recognition is employed by many security services and police departments, the impact of false positives becomes apparent. They have often led to the arrest and criminalization of people of color. It is not speculation to say that bias within AI leads to life-altering events for people of color.
These divergent false-positive rates also mean people of color can be subject to higher rates of financial crime where facial recognition is deployed for verification. It also means that people from diverse backgrounds are being excluded from biometric experiences in travel, retail and much more.
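One way providers can surface this kind of disparity is to break false-positive rates out by demographic group when evaluating a matcher, as NIST does in its demographic-effects testing. A minimal sketch, assuming each evaluation record notes the demographic group, whether the pair was truly the same person, and the matcher's decision (field names and records are illustrative, not real evaluation data):

```python
from collections import defaultdict

def false_positive_rates(trials):
    """Per-group false positive rate: the fraction of different-person
    (impostor) pairs that the matcher wrongly accepted as a match."""
    impostors = defaultdict(int)      # different-person comparisons per group
    false_accepts = defaultdict(int)  # wrongly accepted comparisons per group
    for t in trials:
        if not t["same_person"]:          # impostor pair
            impostors[t["group"]] += 1
            if t["matched"]:              # wrongly accepted
                false_accepts[t["group"]] += 1
    return {g: false_accepts[g] / n for g, n in impostors.items()}

# Illustrative records only.
trials = [
    {"group": "A", "same_person": False, "matched": False},
    {"group": "A", "same_person": False, "matched": False},
    {"group": "A", "same_person": False, "matched": True},   # false positive
    {"group": "A", "same_person": True,  "matched": True},   # correct accept
    {"group": "B", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
]

print(false_positive_rates(trials))  # group A: 1 of 3 impostor pairs accepted
```

A large gap between groups in this metric is exactly the differential that turns a technical flaw into wrongful arrests and fraud exposure for one population.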
How can facial recognition be improved?
In addition to the points raised earlier (improving facial recognition through the use of diverse datasets and greater diversity across senior, decision-making teams), global standards should be introduced. These standards should improve data privacy and ensure that facial recognition providers meet minimum requirements for the diversity of their data.
AI and biometrics should be developed sustainably and not to the detriment of the environment and/or society. AI creation should be carefully considered – it is one thing to have the technology and another thing to make the technology meaningful.
VisionLabs looks forward to working with organizations and partners globally to help shape a more just and inclusive future for AI.