Rite Aid has been banned from using facial recognition software for five years after the Federal Trade Commission (FTC) found that the US pharmacy giant’s “careless use of facial tracking systems” humiliated customers and “put their sensitive information at risk.”
The FTC order, which is subject to approval by the U.S. Bankruptcy Court after Rite Aid filed for Chapter 11 bankruptcy protection in October, also directs Rite Aid to delete all images it collected as part of the rollout of its facial recognition system, as well as any products built from those images. The company must also implement a robust data security program to protect the personal data it collects.
A 2020 Reuters report detailed how the drugstore chain secretly deployed facial recognition systems in nearly 200 U.S. stores over an eight-year period starting in 2012, with “largely low-income, non-white neighborhoods” serving as test beds for the technology.
With the FTC increasingly focused on the misuse of biometric surveillance, Rite Aid landed squarely in the government agency's crosshairs. Among the allegations is that Rite Aid, in partnership with two contracted companies, created a “watchlist database” containing images of customers the company said had engaged in criminal activity at one of its stores. These images, often of poor quality, were captured from CCTV or employees’ mobile phone cameras.
When a customer who supposedly matched an existing image in the database entered a store, employees would receive an automatic alert instructing them to take action. Most often that instruction was to “approach and identify,” meaning to verify the customer’s identity and ask them to leave. These “matches” were often false positives that led employees to falsely accuse customers of wrongdoing, causing “embarrassment, harassment and other harm,” according to the FTC.
“Acting on false positive alerts, employees followed consumers around their stores, searched them, ordered them to leave, called the police to confront or remove consumers, and publicly accused them of shoplifting or other crimes, sometimes in front of friends or family,” the complaint reads.
Confrontation
Facial recognition software has emerged as one of the most controversial aspects of the age of AI-powered surveillance. Over the past few years, we’ve seen cities impose sweeping bans on this technology, while politicians struggle to regulate how police use it. Meanwhile, companies like Clearview AI have also faced lawsuits and fines around the world for major data privacy breaches related to facial recognition technology.
The FTC’s recent findings regarding Rite Aid also shed light on inherent biases in artificial intelligence systems. For example, the FTC says Rite Aid failed to mitigate risks to certain consumers because of their race — its technology was “more likely to produce false positives at stores located in plurality-Black and Asian communities than in predominantly white communities,” the findings note.
In a press release, Rite Aid said it was “pleased to reach a settlement with the FTC” but disagreed with the substance of the allegations.
“The allegations relate to the Company’s facial recognition technology pilot program implemented in a limited number of stores,” Rite Aid said in its statement. “Rite Aid stopped using the technology in this small group of stores more than three years ago, before the FTC’s investigation into the Company’s use of the technology began.”