To prepare investors for the worst, Microsoft has warned that its AI offerings could damage the company’s reputation and brand.
The warning appears in the 10-K report that the Redmond-based company files annually with the Securities and Exchange Commission. In the filing, Microsoft makes clear that despite enormous progress in machine learning, artificial intelligence is still far from solving our problems objectively.
It is no utopian solution.
AI can certainly be unpredictable, and Microsoft has first-hand experience. Back in 2016, Tay, a chatbot the company created, turned into a racist, sexist, and altogether unsavory character after Internet users took advantage of its machine learning capabilities. The incident was covered by media around the world and caused Microsoft real reputational damage.
This is what Redmond warns about:
“AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions.
These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.
Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.”
Notably, the filing was published on August 3, 2018, and came after a research paper by MIT Media Lab graduate researcher Joy Buolamwini, who showed in February of that year that Microsoft’s facial recognition algorithm was less accurate for women and people of color.
In response, Microsoft updated its facial recognition models, and wrote a blog post about how it was addressing bias in its software.
Several companies, including Microsoft competitors Google and Amazon, have been criticized for unethical AI development. And this much is clear: we are still far from being able to rely completely on AI technologies to carry out autonomous tasks.