Social media platform Twitter will examine its machine learning algorithms to identify harmful side effects.
The company’s Responsible ML initiative will be led by its ML Ethics, Transparency and Accountability (META) team. This will assess unintentional harms caused by Twitter’s algorithms and help the company prioritise which issues should be tackled first.
Through the programme, Twitter said it would take responsibility for its algorithmic decisions and would be transparent about its decisions and how it arrives at them. The company also committed to enabling agency and algorithmic choice and ensuring fair outcomes.
Machine learning, the company noted, can affect hundreds of millions of Tweets per day; sometimes, however, there may be unintended consequences.
Twitter said it is currently conducting a gender and racial bias analysis of its image cropping algorithm.
It also said it will check its home timeline recommendations across racial subgroups and content recommendations for different political ideologies across seven countries.
"Trying a horrible experiment… Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia" — Tony "Abolish ICE" Arcieri 🦀 (@bascule), September 19, 2020
The company will publish an analysis of these harms in the coming months.
Twitter said that the findings may lead it to change its product, such as by removing an algorithm or giving people more control over the images they Tweet.
“Both inside and outside of Twitter, we will share our learnings and best practices to improve the industry’s collective understanding of this topic, help us improve our approach, and hold us accountable,” said a statement from META manager Jutta Williams and Director of Software Engineering Rumman Chowdhury.
Artificial intelligence is undoubtedly a powerful tool. However, the risk of bias creeping in, through poor data sets or poor programming, can be serious.
One major example of this was the controversy that struck education systems across the UK last year. With the pandemic making traditional exams impossible, students were graded by algorithm instead. This took into account the school’s historical performance, estimates from teachers and previous exam results.
The algorithm gave around 40% of students a lower than expected grade. This prompted protests from students, followed by government back-tracking. It also helped bring the ethical ramifications of algorithms into focus.
Since then, a report from the European Union Agency for Fundamental Rights (FRA) has warned that AI could pose a threat to civil rights if left unchecked. It said that establishing a clear framework for how to deploy artificial intelligence must be a priority.