India's largest platform and marketplace for AI & Analytics leaders & professionals

3AI Digital Library

Give AI a ‘positive’ spin: Google tells its scientists

3AI January 19, 2021

Google has reportedly been telling its scientists to give AI a “positive” spin in research papers.

Documents obtained by Reuters suggest that, in at least three cases, Google’s researchers were requested to refrain from being critical of AI technology.

Earlier this year, Google established a “sensitive topics” review to catch papers that cast a negative light on AI ahead of their publication.

Google asks its scientists to consult with its legal, policy, and public relations teams before publishing anything on topics that could be deemed sensitive, such as sentiment analysis and the categorisation of people by race and/or political affiliation.

The new review means that papers from Google’s expert researchers which raise questions about AI developments may never be published. Reuters says four staff researchers believe Google is interfering with studies into potential technology harms.

Google recently faced scrutiny after firing leading AI ethics researcher Timnit Gebru.

Gebru is considered a pioneer in the field and researched the risks and inequalities found in large language models. She claims to have been fired by Google over an unpublished paper and sending an email critical of the company’s practices.

In an internal email countering Gebru’s claims, Head of Google Research Jeff Dean wrote:

“We’ve approved dozens of papers that Timnit and/or the other Googlers have authored and then published, but as you know, papers often require changes during the internal review process (or are even deemed unsuitable for submission). 

Unfortunately, this particular paper was only shared with a day’s notice before its deadline — we require two weeks for this sort of review — and then instead of awaiting reviewer feedback, it was approved for submission and submitted.

A cross-functional team then reviewed the paper as part of our regular process and the authors were informed that it didn’t meet our bar for publication and were given feedback about why.”

While it’s one person’s word against another’s, it’s not a great look for Google.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” Reuters reported one of Google’s documents as saying.

On its public-facing website, Google says that its scientists have “substantial” freedom, but that increasingly appears not to be the case.

Picture from freepik.com
