Privacy Goal: More Controls in Users’ Hands

The chief privacy officers of Microsoft, Facebook and Google discussed today at RSA Conference how their respective companies want to put more privacy controls in users’ hands.

SAN FRANCISCO – The same companies that brought, among other things, facial recognition into your living room and video recording to your eyewear swear the next big thing in privacy is putting more controls in your hands.

The privacy officers of Microsoft, Google and Facebook said Wednesday at RSA Conference 2015 that disruptive trends such as the growing connectivity of home devices, wearables and more will continue to complicate privacy and force technology providers to reassess how much control is transferred to the user.

“We are going to see technology continue to challenge traditional notions of privacy,” said Keith Enright, director of the privacy legal team at Google. “We always struggle to keep up and protect traditional norms. I think we’ll see smart companies double down and invest in user controls and keep users empowered about the way information about them is being used.”

For Microsoft, its most prominent example of privacy disruption was the introduction of the Kinect technology for Xbox One, which allows gamers to control their platform using voice commands and gestures. Kinect required considerable privacy discussions during design and implementation, privacy officer Brendon Lynch said, because the technology brought facial recognition to gaming and other activities on the platform. Microsoft’s privacy team, which includes a number of privacy professionals embedded with designers and engineers, helped assess any risks introduced by the new product.

“The privacy team was at the forefront of determining what data collection makes sense, for example,” Lynch said. “Can we still provide something that’s cool and fun, but with a minimal amount of data coming to us or third parties? Applying core privacy processes led to a lot of design decisions that limited the impact on privacy.”

For example, Lynch said, the decision was made early on not to capture images of players, most of whom are children, but to use geometric points on the face instead, with an algorithm creating a unique hash of each player’s face.

“And that stayed local to the device, and not on our server,” Lynch said. “Examples like that are the front line that our privacy team is dealing with. They set standards that can apply broadly to other parts of the company leveraging similar technology.”
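Lynch did not describe the algorithm in detail, but the approach he outlined (landmark geometry reduced to a one-way hash that never leaves the device) can be sketched roughly in Python. The landmark format, the quantization step and every name below are illustrative assumptions, not Kinect’s actual implementation:

```python
import hashlib

def face_template_hash(landmarks, grid=0.01):
    """Reduce facial landmark geometry to a non-reversible digest
    instead of storing an image. `landmarks` is assumed to be a
    list of (x, y) points already normalized to [0, 1] by the face
    tracker; quantizing to a coarse grid makes near-identical
    captures of the same face collapse to the same byte string."""
    quantized = [(round(x / grid), round(y / grid)) for x, y in landmarks]
    payload = ",".join(f"{qx}:{qy}" for qx, qy in quantized).encode()
    # SHA-256 is one-way: the face geometry cannot be recovered
    # from the digest, which stays on the console and is never uploaded.
    return hashlib.sha256(payload).hexdigest()

# Matching a returning player against profiles stored locally only.
profiles = {}  # digest -> gamertag, never synced to a server

def identify(landmarks):
    digest = face_template_hash(landmarks)
    return profiles.get(digest)  # None means "unrecognized player"
```

A production biometric matcher would need fuzzy comparison rather than exact digest equality, but the sketch captures the property Lynch described: only a non-reversible value is stored, and only on the device.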

For Facebook, the introduction of its News Feed functionality, which streams friends’ updates onto a user’s page, along with the launch of Graph Search two years ago, forced the company to give users the ability to manage what information is streamed out, and to whom. The new search capability, for example, makes it simple to find information or photographs related to an individual.

“We had to engineer the capability to do privacy checks, build those types of controls and bring them inline,” said chief privacy officer Erin Egan, who explained that Facebook has a cross-functional review team in place that trains every engineer and product manager on the privacy commitments Facebook has made to its users. “We created a means where, with one switch, users could limit the data sent to friends or un-tag themselves [in a photo]. You have to put controls in place to do that.”
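Egan did not describe the mechanism, but an inline privacy check of the kind she referenced can be pictured as a single gate that every read path must call, plus one switch that retroactively tightens past audiences. The field names and helpers below are hypothetical, not Facebook’s API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_id: int
    audience: str  # assumed values: "public", "friends", "only_me"

def visible_to(post, viewer_id, friends_of):
    """Inline privacy check: every read path calls this gate before
    a post is rendered, rather than filtering after the fact."""
    if post.audience == "public":
        return True
    if post.audience == "friends":
        return viewer_id in friends_of(post.author_id)
    return viewer_id == post.author_id  # "only_me"

def limit_past_posts(posts, author_id):
    """A 'one switch' control of the kind Egan described:
    retroactively tighten the audience of everything already shared."""
    for post in posts:
        if post.author_id == author_id and post.audience == "public":
            post.audience = "friends"
```

The point of the gate is architectural: because every surface calls the same check, a single audience change applies everywhere at once, which is what makes a one-switch control possible.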

As for Google Glass, engineers and privacy experts internally believed at first that the wearable technology’s potential for disruption was overstated. Enright said Google learned plenty during the launch of Glass about how to balance user concerns and diverse global privacy laws with the urge to push cool technology to market.

“Glass was, for me and for Google, a tremendous learning experience. It taught us a lot about launching innovative products and helping users integrate technology into their lives in a responsible way, cognizant that technology may impact other people,” Enright said. He added that Google, too, embeds legal and compliance professionals with product engineers to identify privacy concerns, drive consistent privacy documentation and conduct privacy impact assessments for each new product design.

“A privacy design document is filled out when a new product is conceived, in which the team describes how the information collected by the new product will be processed, shared and deleted,” Enright said. “That is at work from conception and it’s ongoing through development and implementation, so that they’re cognizant about privacy as the product evolves.”
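Enright did not share Google’s internal format, but a privacy design document of the kind he described could, purely as an illustration, be captured as structured data so that ongoing reviews can check it programmatically. The schema and field names here are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    data_type: str       # e.g. "voice queries"
    purpose: str         # why the product collects it
    shared_with: list    # internal systems or third parties
    retention_days: int  # when the data is deleted

@dataclass
class PrivacyDesignDoc:
    product: str
    flows: list = field(default_factory=list)

    def undeclared_sharing(self, observed_recipients):
        """Flag any recipient seen in practice that the document
        never declared, a simple hook for the ongoing review that
        continues as the product evolves."""
        declared = {r for f in self.flows for r in f.shared_with}
        return set(observed_recipients) - declared
```

Treating the document as data rather than free text is our assumption, but it illustrates how “ongoing in development” could work in practice: each release can be diffed against what was declared at conception.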
