Google has trained more than 5,000 employees on its customer-facing Cloud teams to ask critical questions that spot potential ethical issues, such as whether an AI application might lead to economic or educational exclusion, or cause physical, psychological, social or environmental harm.
In addition to the initial ‘Tech Ethics’ training, which more than 800 Googlers have taken since it launched last year, Google developed a new training on spotting AI Principles issues.
“We piloted the course with more than 2,000 Googlers, and it is now available as an online self-study course to all Googlers across the company,” the company said.
Google recently released a version of this training as a mandatory course for customer-facing Cloud teams, and 5,000 Cloud employees have already completed it.
“Our goal is for Google to be a helpful partner not only to researchers and developers who are building AI applications, but also to the billions of people who use them in everyday products,” said the tech giant.
The company said it has released 14 new tools that help explain how responsible AI works, ranging from simple data visualizations of algorithmic bias for general audiences to ‘Explainable AI’ dashboards and tool suites for enterprise users.
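To give a flavour of what such explainability tools do, here is a minimal, hypothetical sketch of feature attribution, the core idea behind dashboards of this kind: for a simple linear model, each feature's contribution to a prediction can be computed exactly as its weight times its deviation from a baseline input. The model, weights and numbers below are illustrative assumptions, not Google's actual tooling.

```python
# Minimal sketch of feature attribution for a linear model.
# All weights and inputs here are hypothetical, for illustration only.

def predict(weights, bias, x):
    """Linear model: prediction = bias + sum(w_i * x_i)."""
    return bias + sum(w * v for w, v in zip(weights, x))

def attributions(weights, x, baseline):
    """Per-feature contribution relative to a baseline input.

    For linear models this decomposition is exact:
    predict(x) - predict(baseline) == sum(attributions).
    """
    return [w * (v - b) for w, v, b in zip(weights, x, baseline)]

weights = [0.5, -1.2, 2.0]   # hypothetical learned weights
bias = 0.1
x = [1.0, 2.0, 0.5]          # the input being explained
baseline = [0.0, 0.0, 0.0]   # a "neutral" reference input

contribs = attributions(weights, x, baseline)
delta = predict(weights, bias, x) - predict(weights, bias, baseline)
assert abs(sum(contribs) - delta) < 1e-9
print(contribs)  # [0.5, -2.4, 1.0]
```

Production tools apply the same principle to far more complex models, where the attributions must be approximated rather than computed exactly.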
Google said its global efforts this year included new programmes to help non-technical audiences, whether policymakers, first-time machine learning (ML) practitioners or domain experts, understand and participate in creating responsible AI systems.
“We know no system, whether human or AI powered, will ever be perfect, so we don’t consider the task of improving it to ever be finished. We continue to identify emerging trends and challenges that surface in our AI Principles reviews,” said Google.