Hey, at least Microsoft's news-curating artificial intelligence doesn't have an ego. That much was made clear today after the company's news app highlighted Microsoft's most recent racist failure.
The inciting incident for this entire debacle appears to be Microsoft's late May decision to fire some of the human editors and journalists responsible for MSN.com and have its AI curate and aggregate stories for the site instead. Following that move, The Guardian reported earlier today that Microsoft's AI confused two members of the pop band Little Mix, who both happen to be women of color, in a republished story originally reported by The Independent. Then, after being called out by band member Jade Thirlwall for the screwup, the AI published stories about its own failing.
So, to recap: Microsoft's AI made a racist error while aggregating another outlet's reporting, got called out for doing so, and then elevated the coverage of its own outing. Notably, this is after Microsoft's human employees were reportedly told to manually remove stories about the Little Mix incident from MSN.com.
Still with me?
"This shit happens to @leighannepinnock and I ALL THE TIME that it's become a running joke," Thirlwall reportedly wrote in an Instagram story, which is no longer visible on her account, about the incident. "It offends me that you couldn't differentiate the two women of colour out of four members of a group … DO BETTER!"
As of the time of this writing, a quick search on the Microsoft News app shows at least one such story remains.
Notably, Guardian editor Jim Waterson spotted several more examples before they were apparently pulled.
"Microsoft's artificial intelligence news app is now swamped with stories selected by the news robot about the news robot backfiring," he wrote on Twitter.
We reached out to Microsoft in an attempt to determine just what, exactly, the hell is going on over there. According to a company spokesperson, the problem is not one of AI gone wrong. No, of course not. It's not like machine learning has a long history of bias (oh, wait). Instead, the spokesperson insisted, the issue was simply that Microsoft's AI selected the wrong photo for the initial article in question.
"In testing a new feature to select an alternate image, rather than defaulting to the first photo, a different image on the page of the original article was paired with the headline of the piece," wrote the spokesperson in an email. "This made it erroneously appear as though the headline was a caption for the picture. As soon as we became aware of this issue, we immediately took action to resolve it, replaced the incorrect image and turned off this new feature."
Unfortunately, the spokesperson did not respond to our question about human Microsoft employees deleting coverage of the initial AI error from Microsoft's news platforms.
Microsoft has a troubled recent history when it comes to artificial intelligence and race. In 2016, the company released a social media chatbot dubbed Tay. In under a day, the chatbot began publishing racist statements. The company subsequently pulled Tay offline, attempted to release an updated version, and then had to pull it offline again.
As evidenced today by the ongoing debacle with its own news-curating AI, Microsoft still has some work to do — both in the artificial intelligence and not-being-racist departments.
Topics: Artificial Intelligence, Microsoft, Racial Justice