Picture: REUTERS/ALY SONG

Seattle/New York/San Francisco — Timnit Gebru, a co-leader of the Ethical Artificial Intelligence team at Google, said she was fired for sending an e-mail that management deemed “inconsistent with the expectations of a Google manager.”

The e-mail and the firing were the culmination of about a week of wrangling over the company’s request that Gebru retract an AI ethics paper she had co-written with six others, including four Google employees, that had been submitted for consideration to an industry conference next year, Gebru said in an interview on Thursday. If she would not retract the paper, Google wanted at least the names of the Google employees removed.

Gebru asked Google Research vice-president Megan Kacholia for an explanation and told her that without more discussion on the paper and the way it was handled she would plan to resign after a transition period. She also wanted to make sure she was clear on what would happen with future, similar research projects her team might undertake.

“We are a team called Ethical AI, of course we are going to be writing about problems in AI,” she said.

Meanwhile, Gebru had chimed in on an e-mail group for company researchers called Google Brain Women and Allies, commenting on a report others were working on about how few women had been hired by Google during the pandemic. Gebru said that in her experience, such documentation was unlikely to be effective as no-one at Google would be held accountable. She referred to her experience with the AI paper submitted for the conference and linked to the report.

“Stop writing your documents because it doesn’t make a difference,” Gebru wrote in the e-mail, which was obtained by Bloomberg News. “There is no way more documents or more conversations will achieve anything.” The e-mail was published earlier by tech writer Casey Newton.

Fired by e-mail

The next day, Gebru said, she was fired by e-mail, with a message from Kacholia saying that Google could not meet her demands and respected her decision to leave the company as a result. The e-mail went on to say that “certain aspects of the e-mail you sent last night to non-management employees in the brain group reflect behaviour that is inconsistent with the expectations of a Google manager,” according to tweets Gebru posted. In those tweets she also specifically called out Jeff Dean, who heads Google’s AI division, for being involved in her removal.

“It’s the most fundamental silencing,” Gebru said in the interview, about Google’s actions in regard to her paper. “You can’t even have your scientific voice.”

Representatives for Mountain View, California-based Google did not reply to multiple requests for comment.

Google protest group Google Walkout For Real Change posted a petition in support of Gebru on Medium, which has gathered more than 400 signatures from employees at the company and more than 500 from academic and industry figures.

The research paper in question deals with possible ethical issues of large language models, a field of research being pursued by OpenAI, Google and others. Gebru said she does not know why Google had concerns about the paper, which she said was approved by her manager and submitted to others at Google for comment.

Google requires its researchers to obtain prior approval for all publications, Gebru said, and the company told her this paper had not followed the proper procedure. The paper was submitted to the ACM Conference on Fairness, Accountability, and Transparency, a conference Gebru co-founded that is due to be held in March.

The paper called out the dangers of using large language models to train algorithms that could, for example, write tweets, answer trivia and translate poetry, according to a copy of the document. The models are essentially trained by analysing language from the internet, which does not reflect the large swathes of the global population not yet online, according to the paper. Gebru highlighted the risk that the models would only reflect the worldview of people who have been privileged enough to be part of the training data.

Gebru, an alumna of the Stanford Artificial Intelligence Laboratory, is one of the leading voices on the ethical use of AI. She is well known for her work on a landmark 2018 study that showed how facial recognition software misidentified dark-skinned women as much as 35% of the time, whereas the technology worked with near-perfect accuracy on white men.

She has also been an outspoken critic of the lack of diversity and unequal treatment of black workers at tech companies, particularly at Alphabet’s Google, and said she believed her dismissal was meant to send a message to the rest of Google’s employees not to speak up.

Under fire

Tensions were already running high at Google’s research division. After the death of George Floyd, a black man who was killed during an arrest by white police officers in Minneapolis in May, the division held an all-hands meeting where black Googlers spoke about their experiences at the company. Many people broke down crying, according to people who attended the meeting.

Gebru disclosed the firing on Wednesday night in a series of tweets, which were met with support from some of her Google co-workers and others in the field.

Gebru is “the reason many next generation engineers, data scientists and more are inspired to work in tech,” wrote Rumman Chowdhury, who formerly served as the head of Responsible AI at Accenture Applied Intelligence and now runs a company she founded called Parity.

Google, along with other US tech giants including Amazon.com and Facebook, has been under fire from the government over claims of bias and discrimination, and has been questioned about its practices at several committee hearings in Washington.

A year ago, Google fired four employees for what it said were violations of data-security policies. The dismissals highlighted already escalating tensions between management and activist workers at a company once revered for its open corporate culture. Gebru took to Twitter at the time to support those who lost their jobs.

For the web search giant, Gebru’s dismissal comes as the company faces a complaint from the National Labor Relations Board alleging unlawful surveillance, interrogation or suspension of workers.

Earlier this week Gebru inquired on Twitter whether anyone was working on regulation to protect ethical AI researchers, similar to whistle-blower protections. “With the amount of censorship and intimidation that goes on towards people in specific groups, how does anyone trust any real research in this area can take place?” she wrote on Twitter.

Rare voice

Gebru was a rare voice of public criticism from inside the company. In August, Gebru told Bloomberg News that black Google employees who speak out are criticised even as the company holds them up as examples of its commitment to diversity. She recounted how co-workers and managers tried to police her tone, make excuses for harassing or racist behaviour, or ignore her concerns.

When Google was considering having Gebru manage another employee, she said in an August interview, her outspokenness on diversity issues was held against her, and concerns were raised about whether she could manage others if she was so unhappy. “People don’t know the extent of the issues that exist because you can’t talk about them, and the moment you do, you’re the problem,” she said at the time.

Bloomberg
