When Microsoft stopped selling its facial recognition technology to police, the company’s president, Brad Smith, said that it had a responsibility to protect human rights. But in emails released on Wednesday, Microsoft didn’t mention any human rights concerns or racial bias issues as it tried to sell facial recognition technology to the US government.
Emails obtained by the American Civil Liberties Union showed that Microsoft had been in talks with the Drug Enforcement Administration from September 2017 to December 2018 in attempts to sell its artificial intelligence tools, including facial recognition and voice recognition.
The released emails come a week after Microsoft announced that it wouldn’t be selling facial recognition to police departments until there was a “strong national law grounded in human rights.” Microsoft didn’t clarify whether that moratorium extended to federal agencies.
Microsoft didn’t respond to a request for comment.
“It is bad enough that Microsoft tried to sell a dangerous technology to a law enforcement agency tasked with spearheading the racist drug war, but it gets worse,” said Nathan Freed Wessler, a senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project. “Even after belatedly promising not to sell face surveillance tech to police last week, Microsoft has refused to say whether it would sell the technology to federal agencies like the DEA.”
Facial recognition is being challenged across the country, with calls for police reform pressuring companies like IBM and Amazon to also revise their policies on the surveillance technology.
The emails, obtained through a lawsuit filed by the ACLU and the ACLU of Massachusetts, showed conversations between the DEA and Microsoft executives, who held multiple demonstrations and workshops on how facial recognition could benefit the agency’s investigations.
“We are gathering requirements for AI services that could be leveraged for transcription, language translation, face recognition, and others. We are planning to extend our cloud environment to include AI services from Microsoft Azure Government (MAG) cloud,” a senior Microsoft employee wrote in an email to the DEA in November 2018.
In these emails, Microsoft marketed tools like its Face API, which researchers from the MIT Media Lab discovered had racial and gender bias and was more likely to misidentify people of color. The tech giant also showed off its voice recognition tools, with emails claiming that the technology could analyze sentiment in recordings.
The DEA didn’t respond to a request for comment.
The emails indicated that the DEA backed off from purchasing facial recognition from Microsoft, citing public scrutiny and the privacy concerns raised in the Government Accountability Office’s report on the FBI’s use of the technology.
Congress has also raised concerns about how the Justice Department has used facial recognition, pointing out that the FBI hasn’t met recommendations on accuracy, transparency, and privacy.
The DEA’s surveillance capabilities are concerning for the ACLU considering the blanket authority the agency has been granted to spy on protesters. A BuzzFeed News report on June 2 found that the Justice Department gave the DEA permission to surveil protesters after the May 25 death of George Floyd.
“This is troubling given the U.S. Drug Enforcement Administration’s record, but it’s even more disturbing now that Attorney General Bill Barr has reportedly expanded this very agency’s surveillance authorities, which could be abused to spy on people protesting police brutality,” Wessler said.