Microsoft laid off its entire ethics and society team in a new round of layoffs that affected 10,000 people, according to Platformer. The ethics and society team inside the artificial intelligence organization was in charge of ensuring that AI principles were followed during product design.
Microsoft has increased its investment in AI innovation. The tech giant has also invested billions in its partnership with OpenAI, an American AI research and deployment company.
Microsoft has even teased a Thursday event about “reinventing productivity with AI,” which is expected to showcase AI features in its Word processor.
The ethics and society team appeared to play an important role in the company’s push to bring its AI products, some of them controversial, to the general public. Its dismissal has therefore raised concerns.
Microsoft is committed to creating AI products and experiences in a safe and responsible manner
Microsoft does, however, still maintain an active Office of Responsible AI, which develops the rules and principles that govern the company’s AI activities. “We are committed to ensuring that AI technologies are created responsibly and in ways that justify people’s trust,” Microsoft says.
As quoted by Platformer, the company said in a statement: “Microsoft is committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this.”
It added, “Over the past six years we have increased the number of people across our product teams and within the Office of Responsible AI who, along with all of us at Microsoft, is accountable for ensuring we put our AI principles into practice. […] We appreciate the trailblazing work the ethics and society team did to help us on our ongoing responsible AI journey.”
Meanwhile, Platformer reported that staff members told the publication the ethics and society team was crucial. One former employee was quoted as saying, “People would look at the principles coming out of the office of responsible AI and say, ‘I don’t know how this applies’. Our job was to show them and to create rules in areas where there were none.”