Written by Matthew Stern
And why we should all be worried
Hoan Ton-That is an Australian techie responsible for what may be the biggest threat to personal privacy the world has ever seen. The one-time model’s previous tech output consisted of an iPhone game and an app that let users superimpose Donald Trump’s hair on their own photos.
Then, in 2017, Ton-That put his skills to rather different use and founded Clearview AI. The company’s product is a facial recognition app that lets users take an image of a person and then search a database for other images of that person. The search puts any attached data in the hands of the searcher: last known addresses, in some cases phone numbers, and any open account information. Clearview claims a database of more than three billion images, scraped from public social media accounts, assorted websites, YouTube, and more.
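Clearview has not published its internals, but systems of this kind typically convert each face into a numeric vector (an “embedding”) and rank stored images by similarity to the query. The sketch below illustrates only that ranking step, using synthetic random vectors in place of real face embeddings; the embedding size, similarity metric, and database scale are all assumptions for illustration, not Clearview’s actual design.

```python
import numpy as np

def cosine_scores(query, database):
    # Cosine similarity between one query vector and every row of the database.
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    return db @ q

def top_matches(query, database, k=3):
    # Indices and scores of the k most similar stored "faces", best first.
    scores = cosine_scores(query, database)
    order = np.argsort(scores)[::-1][:k]
    return list(zip(order.tolist(), scores[order].tolist()))

# Demo with synthetic 128-dimensional embeddings standing in for a
# real face-embedding model's output.
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 128))                  # 1,000 stored images
probe = db[42] + rng.normal(scale=0.1, size=128)   # a noisy photo of person 42
matches = top_matches(probe, db)
print(matches[0][0])  # best match is index 42
```

In a real pipeline the vectors would come from a trained face-recognition model rather than a random generator, and the brute-force scan would be replaced by an approximate nearest-neighbour index to search billions of entries quickly.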
The app is marketed to law enforcement agencies around the world; the New York Times reports that 600 agencies are currently employing the technology, and most adopted it without public knowledge or fanfare. Rather, Clearview’s use crept in slowly, and only now are activists sitting up and asking serious questions. Some agencies have been using it without a government-mandated go-ahead or proper trials. In New Zealand, police tested the controversial facial recognition program before seeking clearance from their own higher-ups or the Privacy Commissioner, making the force one of many agencies relying on an app that has not yet been vetted by third-party experts.
Whether we like it or not, Mr. Ton-That’s work is here to stay in one form or another, in part because it is remarkably affordable: when testing Amazon’s Rekognition in 2019, Washington County spent around US$700 for its first load of images and paid around US$7 per month for all its searches. The technology is also easy to implement and requires very little technical infrastructure. With facial recognition spreading so widely, those using it are unlikely to give up the advantages it offers, and there are some undeniable benefits and positive applications.
One key example comes from May last year. In China, a man who was abducted as a two-year-old and sold to a childless family was reunited with his biological parents thanks to facial recognition technology. After receiving a tip that an individual had bought a child in the 80s, Xi’an police used facial recognition to simulate an image of the boy as an adult and compare it against an image database, much as Clearview works. However, no information about the database or the tools used was disclosed.
The technology has also been put to use identifying and finding victims of child sexual abuse, and identifying terrorists. In 2016, Belgian police used FBI facial recognition technology to find “the man in the hat” responsible for the Brussels terror attacks.
While proponents can find plenty of examples such as the above, some big questions loom regarding ethical implementation and the impingement on civil liberties. The phrase Big Brother seems worn out in 2020, yet there is no better way to sum up facial recognition’s impact. In the United Kingdom, where the Met Police are deploying live facial recognition cameras for the first time, the shadowy “they” suddenly feels a whole lot more tangible to many commentators.
And they’re not alone. Rights groups are now campaigning to put a stop to or limit the use and application of this technology. In some cases, they’ve been successful. A proposed facial recognition rollout at the University of California was halted after pushback from students in tandem with digital advocacy group Fight for the Future.
In 2019, San Francisco, a notable tech hub, became the first US city to ban facial recognition technology. Matt Cagle of the American Civil Liberties Union noted: “With this vote, San Francisco has declared that face surveillance technology is incompatible with a healthy democracy and that residents deserve a voice in decisions about high-tech surveillance.”
Criticism of facial recognition is often grounded in the technology’s unfortunate and dangerous bias: it struggles to differentiate between, and make accurate matches for, people of color and women.
iPhone X users in China may have already experienced the effects of this first-hand: Apple’s Face ID biometric log-in feature was heavily criticized in the People’s Republic because it struggled to differentiate between Chinese faces.
In the US, a federal study confirmed that facial recognition technology is significantly less accurate when presented with people of color. African American faces were up to 100 times more likely to be misidentified than the faces of white men, presumably the group on which the technology is trained. The technology also struggles with Native American and female faces.
Racial and gender biases aside, another troubling aspect of facial recognition is its blatant invasion of privacy. You didn’t ask to be featured in Clearview’s database of images, nor did you consent to a web crawler trawling the net and picking out photos of you. Granted, you posted the images online, to Instagram for example. But Clearview’s scrapers also pull images from other online scrapers, such as Insta Stalker, meaning that images you’ve decided to remove from a once-public account may still be loitering online.
Then there’s the fact that law enforcement agencies and governments might not be the only users. Reporters at Gizmodo managed to obtain the Clearview app. Although a login is required before use, the app’s native Android APK file was found on an “Amazon server that is publicly accessible.” Using that file, the reporters downloaded and installed the app.
Amnesty International notes that public facial recognition technology puts “many human rights at risk, including the rights to privacy, non-discrimination, freedom of expression, association and peaceful assembly.” While facial recognition might one day be unavoidable, in the meantime we can guard the civil liberties we do have, and not take them for granted. Hiding our faces in public or using one of these interesting anti-facial-recognition tools isn’t a viable privacy solution, but protecting our personal data with a VPN means we can avoid one aspect of supposedly regulated, state-sponsored surveillance.
Matthew Stern is a technology content strategist at TechFools, a tech blog aiming at informing readers about the potential dangers of technology and introducing them to the best ways to protect themselves online.
As a tech enthusiast and an advocate for digital freedom, Matthew is dedicated to introducing his readers to the latest technology trends and teaching them how to gain control over their digital lives.
Submitted Exclusively to CrystalWind.ca by Amy Cavendish © 2021 crystalwind.ca