
SAN JOSE, CA – APRIL 18: Facebook CEO Mark Zuckerberg delivers the keynote address at Facebook’s F8 Developer Conference on April 18, 2017 at McEnery Convention Center in San Jose, California. The conference will explore Facebook’s new technology initiatives and products. (Photo by Justin Sullivan/Getty Images)

In an era increasingly heated about race and society, the delicate intersection of this cultural phenomenon with the tech arena is not only deepening but also causing rising distress and concern. In fact, a recent story in The Washington Post reported that minority groups feel unfairly censored by social media behemoth Facebook when using the platform for discussions about racial bias. At the same time, groups and individuals on the other end of the race spectrum are being banned and ousted in a flash from various social media networks. Most such activity begins with an algorithm: for the purposes of this piece, a set of computer code created to raise a red flag when certain speech appears on a site. But from engineer mindset to technical limitation, just how much faith should we be placing in algorithms when it comes to the very sensitive area of digital speech and race, and what does the future hold?
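To make the idea concrete: at its simplest, a speech-flagging algorithm is little more than a list of terms checked against each post. The sketch below is purely illustrative, written in Python; the names (`FLAGGED_TERMS`, `flag_post`) and the placeholder terms are hypothetical and bear no relation to Facebook's actual, far more sophisticated machine-learned systems.

```python
# A minimal, hypothetical sketch of keyword-based flagging.
# Real moderation pipelines use trained classifiers, not word lists;
# this only shows the basic "raise a red flag on certain speech" idea.

FLAGGED_TERMS = {"slur1", "slur2"}  # placeholder stand-ins, not a real list


def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged term."""
    # Normalize: lowercase each word and strip common punctuation.
    words = {w.strip('.,!?"').lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)


print(flag_post("He called my sons slur1."))   # flagged
print(flag_post("A perfectly benign post."))   # not flagged
```

Even this toy version hints at the problem the article describes: the check fires on the word itself, with no notion of who is speaking or why.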

To answer this question, it’s helpful to first go back a bit to that Post article. Among many other incidents, it reveals that a Boston mother, Francie Latour, was shocked and hurt when Facebook deleted her post recounting the racial slurs her sons had recently been targets of. Latour felt she had received a double dose of insult: first in the physical world, and then again in the digital one. The post was later restored once good old-fashioned humans at the social media giant reviewed what an algorithm had determined was a violation of the company’s policy.
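The Latour episode shows the core blind spot of such filters: quoting a slur to report abuse and using a slur as abuse look identical to a word-level check. A hypothetical sketch (the function name and term list are inventions for illustration, not any platform’s real code):

```python
# Hypothetical illustration of context blindness in keyword filtering:
# a victim's report and a direct attack trip the exact same rule.

FLAGGED = {"slur"}  # placeholder stand-in for a flagged term


def contains_flagged_term(text: str) -> bool:
    """Naive check: does any word in the post match a flagged term?"""
    return any(w.strip('.,!?"').lower() in FLAGGED for w in text.split())


victim_report = 'A stranger shouted "slur" at my sons today.'
direct_attack = "You are a slur."

# Both posts are flagged, even though only one is abusive.
print(contains_flagged_term(victim_report))  # flagged
print(contains_flagged_term(direct_attack))  # flagged
```

Distinguishing the two requires understanding intent and context, which is exactly where automated review hands off to human review.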

Indeed, while Facebook head Mark Zuckerberg reportedly eyes political ambitions within an increasingly brown America, even as his own company consistently struggles to achieve racial balance, there are questions around the policy and development of such algorithms. In fact, Malkia Cyril, executive director of the Center for Media Justice, told the Post that she believes Facebook has a double standard when it comes to deleting posts. And she has been part of a group that has met with the company about the issue.

“Our group of Black Lives Matter activists actually met with Facebook representatives in February 2016, not 2014 as it says in The Washington Post article, to discuss the appalling levels of resentment, racist insults and violent threats we were receiving from strangers on the Facebook platform while other racial epithets were allowed to stand,” Cyril explains. “The meeting was a good first step, but very little was done in the direct aftermath. Even then, Facebook executives, largely white, spent a lot of time explaining why they could not do more instead of working with us to improve the user experience for everyone.”

Fast forward a year and a half after Cyril’s meeting, and Facebook’s leaked internal rules on hate speech surfaced as the result of a major investigation by ProPublica.

Not just Facebook but all tech platforms where race discussions occur now seem to be at a crossroads, under scrutiny over their management, standards and policy in this sensitive area. The main question: how much of this imbalance is deliberate, and how much is simply a result of how algorithms naturally work?

