Alibaba Admits Developing Racist Uyghur Recognition
IPVM, 22 December 2020

Below is an article published by IPVM.

Alibaba admitted its Cloud division developed racist AI software, saying it is “dismayed” while claiming it “never intended” to target “specific ethnic groups” and the tech was only used “within a testing environment”.

Developing such software takes complex steps that intentionally target Uyghurs, contrary to Alibaba claiming it “never intended” to target them. Alibaba has also refused to provide any proof this was just a ‘test’ or ‘trial’ – its own website showed this was a live feature.

Finally, Alibaba’s statement was not published in Chinese, despite the racist tech being only available on Alibaba Cloud’s China website, allowing Alibaba to appease an international audience while avoiding the risk of upsetting the PRC government.

Background

On December 16, 2020, IPVM and The New York Times reported that Alibaba’s cloud division openly offered Uyghur detection as part of a content moderation solution in an API guide.

The news was picked up by major media outlets across the world, including CNN, the BBC, Reuters, France’s leading newspaper Le Figaro, Turkey’s leading newspaper Hurriyet, and SCMP in Hong Kong.

Alibaba’s New Statement Says “Dismayed”

Before the investigation was published, Alibaba only issued a curt statement that this software was used “within a testing environment”. However, as the story spread internationally, Alibaba issued a new statement that it was “dismayed” its Cloud division had developed this software, claiming Alibaba “never intended” to target “specific ethnic groups” but remaining firm this was only a “trial” anyway.

By Definition, Uyghur Detection Targets “Specific Ethnic Groups”

Alibaba claims “We never intended our technology to be used for and will not permit it to be used for targeting specific ethnic groups”.

However, developing Uyghur detection, by definition, requires “targeting specific ethnic groups”. Computer vision depends on large training sets with labeled images of Uyghurs and non-Uyghurs in order to train the AI to pick out Uyghurs. This is not a process that happens accidentally or unintentionally.

‘Trial’/’Testing’ Claims: Unproven, Technically Dubious

Alibaba also claims this was a “trial technology” with Alibaba Cloud earlier stating Uyghur recognition was only deployed “within a testing environment”.

However, the API guide showing Uyghur recognition made no mention of testing anywhere. API guides are meant to help customers utilize existing, functioning software.

Alibaba China Keeps Silent

IPVM verified that Alibaba has not responded to this issue on its China platform or social media channels, despite the fact that Uyghur detection was only offered on Alibaba Cloud’s China website (not its International one) while the vast majority of Uyghurs live in the PRC.

Publishing a statement in China saying it is “dismayed” at Uyghur analytics risks upsetting the PRC government, which has required Uyghur analytics for police usage.

No Response From Alibaba

IPVM raised all these points in a request for comment to Alibaba; however, Alibaba has not responded. We will update this article if Alibaba follows up.

Conclusion

Clearly, Alibaba is hoping that this issue will blow over, with investors/media taking its misleading statement as a sign the situation has somehow been resolved.

Regardless, the evidence is clear that Alibaba specifically targeted Uyghurs, and not in a harmless trial, but by directly offering Cloud clients this racist software. Shameful corporate spin cannot detract from this.