Microsoft Calls For Regulation Of Facial Recognition Technology

by Samuel Abasi Posted on July 14th, 2018

Redmond, Washington, USA: Microsoft President Brad Smith on Friday called on Congress to regulate the use of facial recognition technology to protect people’s privacy and freedom of expression. In a blog post, Mr. Smith said the U.S. government should form a bipartisan expert commission to study the issue.

“We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” Mr. Smith wrote. He added: “A world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards.”

“We believe Congress should create a bipartisan expert commission to assess the best way to regulate the use of facial recognition technology in the United States. This should build on recent work by academics and in the public and private sectors to assess these issues and to develop clearer ethical principles for this technology. The purpose of such a commission should include advice to Congress on what types of new laws and regulations are needed, as well as stronger practices to ensure proper congressional oversight of this technology across the executive branch.”

Brad Smith suggested that governments around the world examine both law enforcement and commercial uses of the technology.

“Should law enforcement use of facial recognition be subject to human oversight and controls?” he wrote. “Should the law require that companies obtain prior consent before collecting individuals’ images for facial recognition?”

Microsoft is the first big tech company to raise a serious alarm about an increasingly sought-after technology for recognizing a person’s face from a photo or through a camera.

Brad Smith says Microsoft, which supplies face recognition to some businesses, has already rejected some customers’ requests to deploy the technology in situations involving “human rights risks.”

Brad Smith defended the company’s contract with US Immigration and Customs Enforcement, saying it doesn’t involve face recognition.

“Finally, as we think about the evolving range of technology uses, we think it’s important to acknowledge that the future is not simple. A government agency that is doing something objectionable today may do something that is laudable tomorrow. We therefore need a principled approach for facial recognition technology, embodied in law, that outlasts a single administration or the important political issues of a moment.

Even at a time of increasingly polarized politics, we have faith in our fundamental democratic institutions and values. We have elected representatives in Congress that have the tools needed to assess this new technology, with all its ramifications. We benefit from the checks and balances of a Constitution that has seen us from the age of candles to an era of artificial intelligence. As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law. Given the global sweep of this technology, we’ll need to address these issues internationally, in no small part by working with and relying upon many other respected voices. We will all need to work together, and we look forward to doing our part,” Mr. Smith wrote.

Mr. Smith’s appeal also comes as Silicon Valley is facing withering scrutiny from lawmakers and privacy experts. Several companies have been harshly criticized in recent months for their role in spreading false information during the 2016 election, and exploiting users’ personal data. In response, some businesses, like Facebook, have expressed more openness to regulation of practices like political advertising.

With many of its rivals under fire, Microsoft has aggressively tried to position itself as the moral compass of the industry. Company executives have been outspoken about safeguarding users’ privacy, as well as warning about the potential discriminatory effects of using automated algorithms to make important decisions like hiring.

The powerful technology can be used to identify people in photos or video feeds without their knowledge or permission. Proponents see it as a potentially important tool for identifying criminals, but civil liberties experts have warned that the technology could enable mass surveillance, hindering people’s ability to freely attend political protests or go about their day-to-day lives in anonymity.

In April, privacy groups filed a complaint with the Federal Trade Commission saying that Facebook had turned on new face-matching services without obtaining appropriate permission of users. Facebook has denied the groups’ accusations.

In May, the American Civil Liberties Union and other civil rights groups asked Amazon to stop selling its face-matching service, Rekognition, to law enforcement agencies.

In calling for government oversight of facial recognition, Microsoft may be trying to get ahead of any new state efforts to tightly regulate the technology.

A tough new data protection law in the European Union generally prohibits companies from collecting the biometric data needed for facial recognition without first obtaining users’ specific consent. Illinois has similar restrictions.

Civil liberties and privacy advocates said they both welcomed and felt wary of Microsoft’s push for government regulation, questioning how committed the company was to strong user privacy controls.

In May, for instance, Satya Nadella, Microsoft’s chief executive, said at a company developer conference that privacy was a “human right.” Yet in June, Microsoft donated $195,000 to an effort to defeat a consumer privacy bill in California.

“People have a right to go about their lives without having their faces scanned in secret — by companies or the government,” said Alvaro Bedoya, director of the Center on Privacy & Technology at Georgetown Law, who has studied facial recognition. “Will Microsoft agree that companies should never scan your face without your permission? Will it agree that government face scans should be tightly controlled and in some cases banned?”

April Isenhower, a spokeswoman for Microsoft, said that the company had long been committed to privacy, including pushing for a national consumer privacy law in the United States since 2005.

Tech companies are spreading facial recognition in part because it provides a powerful way for them to connect consumers’ online and real lives.

Over the last few years, Amazon, Apple, Facebook, Google and Microsoft have each filed face recognition patents. Last year, Apple introduced Face ID, a service that enables iPhone X owners to unlock their phones with their face. Many Windows laptops have a similar feature.

Earlier this year, Google’s Art & Culture app created a craze after it added a feature that could match users’ selfies with similar faces in well-known paintings. Google also recently introduced a camera, called Google Clips, with facial recognition.

In addition to using facial recognition for its own consumer services, Microsoft — like Amazon — also sells the software to others.

Microsoft markets technology that can detect faces in photos, as well as facial features like hair color, and emotions like anger or disgust, according to its website. It also sells facial recognition software that “enables you to search, identify, and match faces in your private repository of up to one million people,” the site said. Uber has used the technology to verify drivers’ identities, according to Microsoft marketing materials.

Ms. Isenhower, the Microsoft spokeswoman, declined to answer questions about whether the company provided facial recognition services to other government agencies or whether it had put any specific restrictions on its customers’ use of the technology. She also declined to discuss the company’s position on consumer consent for facial recognition.

How computers see faces and other objects

Computers started to be able to recognize human faces in images decades ago, but now artificial intelligence systems are rivaling people’s ability to classify objects in photos and videos.

That’s sparking increased interest from government agencies and businesses, which are eager to bestow vision skills on all sorts of machines. Among them: self-driving cars, drones, personal robots, in-store cameras and medical scanners that can search for skin cancer. There are also our own phones, some of which can now be unlocked with a glance.

How facial recognition works

Algorithms designed to detect facial features and recognize individual faces have grown more sophisticated since early efforts decades ago.

A common method has involved measuring facial dimensions, such as the distance between the nose and ear or from one corner of the eye to another. That information can then be broken down into numbers and matched to similar data extracted from other images. The closer they are, the better they match.
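The dimension-matching approach described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor’s actual algorithm; the landmark names and coordinates are made up for the example. Each face is reduced to a vector of measured distances, and candidates are ranked by how close their vectors are.

```python
# Toy sketch of dimension-based face matching (hypothetical landmarks, not a
# production algorithm): reduce a face to a vector of measurements, then
# compare faces by the distance between their vectors.
import math


def face_vector(landmarks):
    """Turn a dict of (x, y) landmark points into a tuple of pairwise
    distances. The landmark names here are invented for illustration."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return (
        dist(landmarks["nose"], landmarks["left_ear"]),
        dist(landmarks["left_eye_outer"], landmarks["left_eye_inner"]),
    )


def match_score(vec_a, vec_b):
    """Euclidean distance between two face vectors; smaller = closer match."""
    return math.dist(vec_a, vec_b)


probe = face_vector({"nose": (50, 60), "left_ear": (20, 55),
                     "left_eye_outer": (35, 45), "left_eye_inner": (45, 45)})
candidate = face_vector({"nose": (51, 61), "left_ear": (21, 56),
                         "left_eye_outer": (36, 46), "left_eye_inner": (46, 46)})
stranger = face_vector({"nose": (50, 80), "left_ear": (5, 50),
                        "left_eye_outer": (30, 40), "left_eye_inner": (55, 42)})

# The candidate's measurements nearly coincide with the probe's, so its
# score should be lower (a better match) than the stranger's.
print(match_score(probe, candidate) < match_score(probe, stranger))  # True
```

Real systems today replace hand-picked measurements with learned feature vectors, but the matching step, comparing distances between numeric representations, works on the same principle.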

Such analysis is now aided by greater computing power and huge troves of digital imagery that can be easily stored and shared.

Facial recognition: From faces to objects (and pets)

“Face recognition is an old topic. It’s always been pretty good. What really got everyone’s attention is object recognition,” says Michael Brown, a computer science professor at Toronto’s York University who helps organize the annual Conference on Computer Vision and Pattern Recognition.

Research over the past decade has focused on the development of brain-like neural networks that can automatically “learn” to recognize what’s in an image by looking for patterns in big data sets. But humans continue to help make machines smarter by labeling photos, as happens when Facebook users tag a friend.

An annual image recognition competition that lasted from 2010 to 2017 drew top researchers from companies like Google and Microsoft. Among the revelations: computers can do better than humans at distinguishing between various Welsh corgi breeds, in part because they’re better able to absorb the knowledge it takes to make those distinctions quickly.

But computers have been confused by more abstract forms, such as statues.

Facial recognition: The “coded gaze”

The growing use of face recognition by law enforcement has highlighted longstanding concerns about racial and gender bias.


A study led by MIT computer scientist Joy Buolamwini found that face recognition systems built by companies including IBM and Microsoft were much more likely to misidentify darker-skinned people, especially women. (Buolamwini called this effect “the coded gaze.”) Both Microsoft and IBM recently announced efforts to make their systems less biased by using bigger and more diverse photo repositories to train their software.
