Exclusive

Google makes changes to privacy oversight, worrying policymakers

Sen. Wyden called for the FTC to investigate the tech giant.

Google headquarters is pictured in Mountain View, California.

At least six of Google’s top privacy and regulatory officials have left the company in recent months, and a key oversight team was disbanded, according to interviews and a POLITICO review of records.

That’s leading to concerns among policymakers that one of the world’s most powerful tech companies is releasing new artificial intelligence products without sufficient protections for users. The changes come on top of other cutbacks that had triggered similar worries.

With no strong government rules in place to regulate AI, private companies like Google are largely responsible for policing the potential harms of their own products. The fast growth of AI has raised new worries about the risks to citizens and consumers, from potential use of private data like images and texts without consent, to possible bias in AI systems that make decisions about people’s access to housing, financial assistance and health care.

Google’s machine learning privacy team, a key group that consulted on legal and privacy policy for AI, was disbanded in February, and only two members of the team are still with the company today, according to a current Google engineer, who was granted anonymity to protect their status at the company.

The departing executives include the company’s chief privacy officer, Keith Enright; its director of privacy for product and engineering, Lawrence You; the founder of its responsible AI operations and governance team, Jen Gennai; its global chief compliance officer, Spyro Karetsos; its Latin America chief compliance officer, Patricia Godoy Oliveira; and its chief health equity officer, Dr. Ivor Horn.

The company’s Responsible Innovation team, which reviewed projects to ensure they aligned with Google’s AI ethics principles, was also recently disbanded, according to a WIRED report in January.

Enright, Karetsos and Oliveira did not respond to requests for comment. Gennai, Horn and You declined to comment.

In response to questions from POLITICO, Google said it had not lowered its privacy and AI ethics standards, and had made no changes to the way it evaluates privacy risks with its new product launches.

“While a small number of longtime employees are moving on, most have already been replaced, and in all cases, the work continues,” Google said in a statement. The company said Karetsos and Horn had been replaced.

However, the engineer and a former staffer familiar with the company’s AI development both told POLITICO that since the departures, privacy teams at the company have felt compelled to rubber-stamp projects, applying less scrutiny than they once would have.

Google CEO Sundar Pichai wrote in an internal memo in January that the company planned layoffs “to simplify execution and drive velocity in some areas” such as AI development.

Sen. Ron Wyden (D-Ore.), a privacy hawk on Capitol Hill, said the changes raise questions about whether the company is taking its obligations to the public seriously.

He specifically raised concerns that the recent departures could violate a deal the company has with the Federal Trade Commission: Since 2011, Google has been required to maintain a comprehensive privacy program as part of an FTC settlement after the company allegedly misled users into joining its Buzz social network. Wyden called for the agency to investigate the staffing changes at Google.

“These latest actions show that Google’s priorities have not changed,” he said, “and the FTC should look into whether Google is violating that order.”

The FTC declined to comment on whether it would investigate Google’s privacy compliance.

The company said in a statement that members of the Responsible AI and machine learning privacy teams were shifted into other roles within the company where they could be used “most effectively and at scale.”

“We often make team updates as products and needs change,” a Google spokesperson said in an emailed statement. “In fact, we’ve recently increased the number of people working on AI safety and responsibility, as well as regulatory compliance, across the company.”

The former Google staffer said the new hires lack the expertise and institutional knowledge of the leaders they replaced, and may not feel free to press privacy or ethical concerns that arise.

In the technology industry, which is lightly regulated by the federal government, internal compliance teams are key lines of defense for customer privacy and equity, and for reducing other potential harms.

At Google, privacy and compliance teams can delay or shut down products if they flag ethical or legal risks in the company’s technology, according to current and former staffers.

Last December, WIRED reported that privacy and legal reviews were the only mandatory steps for product launches, and that these teams wielded meaningful clout in the organization, for example by ending acquisition attempts because of the privacy risks involved.

While privacy advocates and government officials consider such teams essential guardrails, tech companies can internally view them as slowing the development of new products, a source of friction in a competitive, fast-moving industry like AI.

Another accountability-related team that Google recently cut back is its Legal Investigations Support team, which responds to government requests for user data, the Washington Post reported in June.

Google’s privacy and compliance teams are responsible for ensuring that projects don’t run afoul of laws like the European Union’s General Data Protection Regulation, the continent’s sweeping digital privacy measure, and for weighing ethical concerns such as bias in AI outputs.

“When any large data company, like Google, is perceived as sacrificing consumer privacy safeguards and compliance in the name of innovation, it should be of concern,” said Tricia Enright, spokesperson for Senate Commerce Chair Maria Cantwell.

Cantwell is hosting a hearing on Thursday about the need for a federal data privacy law to set guidelines for AI development, and she also backs privacy legislation that would establish civil rights protections against algorithmic bias.

As with data privacy regulation, it could be years before Congress enacts any laws governing AI, leaving companies like Google free to capitalize on the legal ambiguities.