Companies not managing societal impacts of facial recognition technology

But investor group engagement reveals best practice


Natasha Turner

Facial recognition technology (FRT) poses a number of human rights risks, including the potential for racial and gender bias and the invasion of privacy, but some companies, including Microsoft, Motorola and Thales, are making strides in addressing them.

These are the findings of the FRT investor initiative, launched in March last year by Candriam. By June this year it had been joined by 55 investors representing more than $5trn in assets under management, who signed an Investor Statement on Facial Recognition committing them to address the risks raised by FRT products and services and to engage with companies on their FRT activities and human rights policies.

Working with 20 other investors, including Aviva Investors, Columbia Threadneedle, NZ Super Fund, Robeco and Domini Impact, Candriam has now surveyed 15 companies involved in FRT to understand how they assess, manage and mitigate human rights risks.

Its interim report, Investor Engagement on Human Rights Risks of Facial Recognition Technology, found companies displaying best practice were more likely to welcome regulation in the industry and to put strong governance codes in place.

“As investors in the technology sector, we have an important role to play in encouraging companies to identify, manage and mitigate human rights risks in their use of artificial intelligence and FRT,” said Benjamin Chekroun, engagement analyst at Candriam.

Louise Piffaut, senior ESG analyst from Aviva Investors, added: “With regulation remaining limited in the technology sector, companies are not yet taking full account of their responsibilities in managing the societal impacts of FRT, depending on where in the value chain they are involved. Investors have an important role to play in pointing to best practice and engaging with companies on the issue.”

Best and worst practice

The report highlighted the software and semiconductor industries as leaders and laggards, respectively. “If we were to make any generalisations, we might conclude that the closer an organisation is to the algorithm, the greater awareness the company has,” it said.

“Thus, in this sample, the software industry seemed to display the strongest practices and respect for human rights.

“Conversely, despite developing chips specifically for facial recognition systems, semiconductor companies showed little interest in the potential misuse of their devices further down the chain. Is it possible a company considers itself to have less responsibility when it is further away from the end use of the product?”

The investor group found Microsoft, Motorola and Thales stood out. Microsoft has put strong governance in place around the ethics of artificial intelligence, and specifically FRT, and has recently retired facial classification capabilities. In addition, Microsoft is one of the first technology companies to place a moratorium on the sale of FRT to law enforcement.

Motorola has said FRT should assist humans rather than make consequential decisions and has built two-factor authentication into its technology. And Thales’ FRT-assisted passport control requires stringent monitoring and makes a point of destroying user data after each passage.

“We have mapped company practices and presented those we believe represent today’s best,” the report concluded.

“Even with these, some of the expectations we had listed in our initial FRT statement, such as the existence of effective grievance mechanisms, remain untouched.

“The next step is to discuss with each company how identified best practices could be implemented in their own organisation.”

The outcome of this second campaign of dialogues is planned for release in 2023.
